Test Report: Docker_Linux_crio 21682

7a7892355cfa060afe2cc9d2507b1d1308b66169:2025-10-02:41740

Failed tests (56/166)

Order  Failed test  Duration (s)
27 TestAddons/Setup 514.8
38 TestErrorSpam/setup 500.5
47 TestFunctional/serial/StartWithProxy 498.28
49 TestFunctional/serial/SoftStart 366.38
51 TestFunctional/serial/KubectlGetPods 2.13
61 TestFunctional/serial/MinikubeKubectlCmd 2.13
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.17
63 TestFunctional/serial/ExtraConfig 733.93
64 TestFunctional/serial/ComponentHealth 1.85
67 TestFunctional/serial/InvalidService 0.06
70 TestFunctional/parallel/DashboardCmd 1.64
73 TestFunctional/parallel/StatusCmd 3.26
77 TestFunctional/parallel/ServiceCmdConnect 2.25
79 TestFunctional/parallel/PersistentVolumeClaim 241.53
83 TestFunctional/parallel/MySQL 1.33
89 TestFunctional/parallel/NodeLabels 1.29
94 TestFunctional/parallel/ServiceCmd/DeployApp 0.06
95 TestFunctional/parallel/ServiceCmd/List 0.3
96 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
97 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
98 TestFunctional/parallel/ServiceCmd/Format 0.34
100 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.34
101 TestFunctional/parallel/ServiceCmd/URL 0.32
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0.07
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 99.28
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
118 TestFunctional/parallel/MountCmd/any-port 2.16
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.29
122 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
141 TestMultiControlPlane/serial/StartCluster 502.39
142 TestMultiControlPlane/serial/DeployApp 114.71
143 TestMultiControlPlane/serial/PingHostFromPods 1.35
144 TestMultiControlPlane/serial/AddWorkerNode 1.52
145 TestMultiControlPlane/serial/NodeLabels 1.32
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.59
147 TestMultiControlPlane/serial/CopyFile 1.56
148 TestMultiControlPlane/serial/StopSecondaryNode 1.62
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.58
150 TestMultiControlPlane/serial/RestartSecondaryNode 37.53
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.6
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 369.87
153 TestMultiControlPlane/serial/DeleteSecondaryNode 1.8
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.59
155 TestMultiControlPlane/serial/StopCluster 1.37
156 TestMultiControlPlane/serial/RestartCluster 368.39
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.6
158 TestMultiControlPlane/serial/AddSecondaryNode 1.53
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.63
163 TestJSONOutput/start/Command 500.6
166 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestMinikubeProfile 501.26
221 TestMultiNode/serial/ValidateNameConflict 7200.06
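
Note: durations at or very near 7200 s (for example TestMultiNode/serial/ValidateNameConflict) most likely reflect the suite-level two-hour go test timeout rather than that test's own runtime. To reproduce a single failure from this table locally, something along the following lines should work from a minikube checkout; the ./test/integration layout and the --minikube-start-args flag follow minikube's integration harness, and the -timeout value here is only a guess:

    # Hypothetical local repro of one failed test; adjust flags to your environment.
    go test ./test/integration -v -run 'TestFunctional/serial/SoftStart' \
      -timeout 60m \
      -args --minikube-start-args="--driver=docker --container-runtime=crio"
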
TestAddons/Setup (514.8s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (8m34.760720029s)
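
Exit status 80 is one of minikube's reserved error exit codes; the truncated stdout and the stderr trace below show how far provisioning got before the start command gave up. When triaging a failure like this, minikube's own log bundle is usually the first artifact to collect (profile name taken from the command above; logs and delete are standard minikube subcommands):

    # Grab logs from the failed profile, then clean it up.
    out/minikube-linux-amd64 logs -p addons-436069 --file=addons-436069.log
    out/minikube-linux-amd64 delete -p addons-436069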

-- stdout --
	* [addons-436069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "addons-436069" primary control-plane node in "addons-436069" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 20:22:58.727245   85408 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:58.727479   85408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:58.727487   85408 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:58.727491   85408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:58.727706   85408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:22:58.728253   85408 out.go:368] Setting JSON to false
	I1002 20:22:58.729116   85408 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":7520,"bootTime":1759429059,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:22:58.729197   85408 start.go:140] virtualization: kvm guest
	I1002 20:22:58.731395   85408 out.go:179] * [addons-436069] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:22:58.732841   85408 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:22:58.732837   85408 notify.go:220] Checking for updates...
	I1002 20:22:58.734271   85408 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:22:58.735582   85408 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:22:58.736810   85408 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:22:58.738005   85408 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:22:58.739275   85408 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:22:58.741006   85408 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:22:58.764171   85408 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:22:58.764350   85408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:58.819134   85408 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-02 20:22:58.809433985 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:58.819241   85408 docker.go:318] overlay module found
	I1002 20:22:58.821699   85408 out.go:179] * Using the docker driver based on user configuration
	I1002 20:22:58.823158   85408 start.go:304] selected driver: docker
	I1002 20:22:58.823179   85408 start.go:924] validating driver "docker" against <nil>
	I1002 20:22:58.823193   85408 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:22:58.823929   85408 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:58.880114   85408 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:51 SystemTime:2025-10-02 20:22:58.869500674 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:58.880257   85408 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:22:58.880471   85408 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:22:58.882165   85408 out.go:179] * Using Docker driver with root privileges
	I1002 20:22:58.883464   85408 cni.go:84] Creating CNI manager for ""
	I1002 20:22:58.883542   85408 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:22:58.883560   85408 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:22:58.883630   85408 start.go:348] cluster config:
	{Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:22:58.885023   85408 out.go:179] * Starting "addons-436069" primary control-plane node in "addons-436069" cluster
	I1002 20:22:58.886283   85408 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:22:58.887595   85408 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:22:58.888981   85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:22:58.889020   85408 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:22:58.889028   85408 cache.go:58] Caching tarball of preloaded images
	I1002 20:22:58.889023   85408 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:22:58.889116   85408 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:22:58.889127   85408 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:22:58.889483   85408 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json ...
	I1002 20:22:58.889508   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json: {Name:mk39d759042797b89bb2baad365f87f5edd91ad6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:22:58.904981   85408 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1002 20:22:58.905152   85408 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1002 20:22:58.905174   85408 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1002 20:22:58.905180   85408 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1002 20:22:58.905193   85408 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1002 20:22:58.905201   85408 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1002 20:23:11.272069   85408 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1002 20:23:11.272109   85408 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:23:11.272142   85408 start.go:360] acquireMachinesLock for addons-436069: {Name:mkc1c80a9dbdd8675adf7a837ad4b78f6dc0cbce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:23:11.272253   85408 start.go:364] duration metric: took 89.67µs to acquireMachinesLock for "addons-436069"
	I1002 20:23:11.272280   85408 start.go:93] Provisioning new machine with config: &{Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:23:11.272359   85408 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:23:11.274246   85408 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1002 20:23:11.274530   85408 start.go:159] libmachine.API.Create for "addons-436069" (driver="docker")
	I1002 20:23:11.274573   85408 client.go:168] LocalClient.Create starting
	I1002 20:23:11.274689   85408 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 20:23:11.556590   85408 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 20:23:11.597341   85408 cli_runner.go:164] Run: docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:23:11.614466   85408 cli_runner.go:211] docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:23:11.614529   85408 network_create.go:284] running [docker network inspect addons-436069] to gather additional debugging logs...
	I1002 20:23:11.614548   85408 cli_runner.go:164] Run: docker network inspect addons-436069
	W1002 20:23:11.630619   85408 cli_runner.go:211] docker network inspect addons-436069 returned with exit code 1
	I1002 20:23:11.630648   85408 network_create.go:287] error running [docker network inspect addons-436069]: docker network inspect addons-436069: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-436069 not found
	I1002 20:23:11.630668   85408 network_create.go:289] output of [docker network inspect addons-436069]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-436069 not found
	
	** /stderr **
	I1002 20:23:11.630831   85408 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:23:11.647916   85408 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002100250}
	I1002 20:23:11.647963   85408 network_create.go:124] attempt to create docker network addons-436069 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:23:11.648026   85408 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-436069 addons-436069
	I1002 20:23:11.707394   85408 network_create.go:108] docker network addons-436069 192.168.49.0/24 created
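As a quick sanity check, the network reported created here can be inspected from the same host with the plain docker CLI; the expected values come from the two log lines above:

    # Confirm the subnet and gateway minikube picked for the cluster network.
    docker network inspect addons-436069 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected: 192.168.49.0/24 192.168.49.1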
	I1002 20:23:11.707423   85408 kic.go:121] calculated static IP "192.168.49.2" for the "addons-436069" container
	I1002 20:23:11.707496   85408 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:23:11.724899   85408 cli_runner.go:164] Run: docker volume create addons-436069 --label name.minikube.sigs.k8s.io=addons-436069 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:23:11.742535   85408 oci.go:103] Successfully created a docker volume addons-436069
	I1002 20:23:11.742630   85408 cli_runner.go:164] Run: docker run --rm --name addons-436069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --entrypoint /usr/bin/test -v addons-436069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:23:17.296454   85408 cli_runner.go:217] Completed: docker run --rm --name addons-436069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --entrypoint /usr/bin/test -v addons-436069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (5.553783939s)
	I1002 20:23:17.296487   85408 oci.go:107] Successfully prepared a docker volume addons-436069
	I1002 20:23:17.296518   85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:23:17.296538   85408 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:23:17.296615   85408 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-436069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:23:21.673521   85408 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-436069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.376847102s)
	I1002 20:23:21.673561   85408 kic.go:203] duration metric: took 4.377018781s to extract preloaded images to volume ...
	W1002 20:23:21.673657   85408 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:23:21.673708   85408 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:23:21.673775   85408 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:23:21.727782   85408 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-436069 --name addons-436069 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-436069 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-436069 --network addons-436069 --ip 192.168.49.2 --volume addons-436069:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:23:22.013927   85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Running}}
	I1002 20:23:22.032780   85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
	I1002 20:23:22.049946   85408 cli_runner.go:164] Run: docker exec addons-436069 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:23:22.101205   85408 oci.go:144] the created container "addons-436069" has a running status.
	I1002 20:23:22.101238   85408 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa...
	I1002 20:23:22.435698   85408 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:23:22.461661   85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
	I1002 20:23:22.480195   85408 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:23:22.480232   85408 kic_runner.go:114] Args: [docker exec --privileged addons-436069 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:23:22.524523   85408 cli_runner.go:164] Run: docker container inspect addons-436069 --format={{.State.Status}}
	I1002 20:23:22.542631   85408 machine.go:93] provisionDockerMachine start ...
	I1002 20:23:22.542773   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:22.560346   85408 main.go:141] libmachine: Using SSH client type: native
	I1002 20:23:22.560659   85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 20:23:22.560678   85408 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:23:22.705732   85408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436069
	
	I1002 20:23:22.705776   85408 ubuntu.go:182] provisioning hostname "addons-436069"
	I1002 20:23:22.705839   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:22.724077   85408 main.go:141] libmachine: Using SSH client type: native
	I1002 20:23:22.724303   85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 20:23:22.724317   85408 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-436069 && echo "addons-436069" | sudo tee /etc/hostname
	I1002 20:23:22.876077   85408 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-436069
	
	I1002 20:23:22.876192   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:22.893376   85408 main.go:141] libmachine: Using SSH client type: native
	I1002 20:23:22.893583   85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 20:23:22.893599   85408 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-436069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-436069/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-436069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:23:23.036537   85408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:23:23.036568   85408 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:23:23.036610   85408 ubuntu.go:190] setting up certificates
	I1002 20:23:23.036624   85408 provision.go:84] configureAuth start
	I1002 20:23:23.036678   85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
	I1002 20:23:23.054068   85408 provision.go:143] copyHostCerts
	I1002 20:23:23.054147   85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:23:23.054264   85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:23:23.054333   85408 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:23:23.054386   85408 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.addons-436069 san=[127.0.0.1 192.168.49.2 addons-436069 localhost minikube]
	I1002 20:23:23.161577   85408 provision.go:177] copyRemoteCerts
	I1002 20:23:23.161637   85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:23:23.161692   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.178947   85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
	I1002 20:23:23.281158   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 20:23:23.300413   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 20:23:23.318382   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:23:23.335806   85408 provision.go:87] duration metric: took 299.164062ms to configureAuth
	I1002 20:23:23.335838   85408 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:23:23.336072   85408 config.go:182] Loaded profile config "addons-436069": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:23:23.336218   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.354665   85408 main.go:141] libmachine: Using SSH client type: native
	I1002 20:23:23.354899   85408 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1002 20:23:23.354918   85408 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:23:23.608355   85408 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:23:23.608382   85408 machine.go:96] duration metric: took 1.065718242s to provisionDockerMachine
	I1002 20:23:23.608395   85408 client.go:171] duration metric: took 12.333810073s to LocalClient.Create
	I1002 20:23:23.608420   85408 start.go:167] duration metric: took 12.333890414s to libmachine.API.Create "addons-436069"
	I1002 20:23:23.608429   85408 start.go:293] postStartSetup for "addons-436069" (driver="docker")
	I1002 20:23:23.608442   85408 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:23:23.608511   85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:23:23.608586   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.625979   85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
	I1002 20:23:23.729771   85408 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:23:23.733425   85408 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:23:23.733453   85408 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:23:23.733465   85408 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:23:23.733527   85408 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:23:23.733550   85408 start.go:296] duration metric: took 125.115167ms for postStartSetup
	I1002 20:23:23.733855   85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
	I1002 20:23:23.750954   85408 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/config.json ...
	I1002 20:23:23.751262   85408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:23:23.751306   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.768203   85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
	I1002 20:23:23.866973   85408 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:23:23.871193   85408 start.go:128] duration metric: took 12.598818239s to createHost
	I1002 20:23:23.871221   85408 start.go:83] releasing machines lock for "addons-436069", held for 12.598953112s
	I1002 20:23:23.871287   85408 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-436069
	I1002 20:23:23.888209   85408 ssh_runner.go:195] Run: cat /version.json
	I1002 20:23:23.888261   85408 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:23:23.888268   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.888313   85408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-436069
	I1002 20:23:23.906522   85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
	I1002 20:23:23.908205   85408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/addons-436069/id_rsa Username:docker}
	I1002 20:23:24.074363   85408 ssh_runner.go:195] Run: systemctl --version
	I1002 20:23:24.081162   85408 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:23:24.114923   85408 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:23:24.119623   85408 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:23:24.119680   85408 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:23:24.145084   85408 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 20:23:24.145112   85408 start.go:495] detecting cgroup driver to use...
	I1002 20:23:24.145141   85408 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:23:24.145182   85408 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:23:24.160550   85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:23:24.172014   85408 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:23:24.172060   85408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:23:24.187602   85408 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:23:24.205911   85408 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:23:24.284295   85408 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:23:24.371200   85408 docker.go:234] disabling docker service ...
	I1002 20:23:24.371277   85408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:23:24.390275   85408 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:23:24.403276   85408 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:23:24.483636   85408 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:23:24.563979   85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:23:24.575865   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:23:24.589545   85408 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:23:24.589605   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.599592   85408 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:23:24.599651   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.608095   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.617139   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.625989   85408 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:23:24.633987   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.642324   85408 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:23:24.655053   85408 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
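Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf holding roughly the following drop-in. This is a reconstruction from the commands, not a captured file, and the section headers are assumed from a stock crio.conf layout:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]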
	I1002 20:23:24.663473   85408 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:23:24.670697   85408 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1002 20:23:24.670838   85408 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1002 20:23:24.683363   85408 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:23:24.690858   85408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:23:24.772848   85408 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:23:24.871021   85408 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:23:24.871110   85408 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:23:24.875176   85408 start.go:563] Will wait 60s for crictl version
	I1002 20:23:24.875242   85408 ssh_runner.go:195] Run: which crictl
	I1002 20:23:24.878893   85408 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:23:24.902718   85408 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:23:24.902826   85408 ssh_runner.go:195] Run: crio --version
	I1002 20:23:24.929536   85408 ssh_runner.go:195] Run: crio --version
	I1002 20:23:24.958032   85408 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:23:24.959113   85408 cli_runner.go:164] Run: docker network inspect addons-436069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:23:24.975765   85408 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:23:24.980097   85408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:23:24.990391   85408 kubeadm.go:883] updating cluster {Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:23:24.990527   85408 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:23:24.990580   85408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:23:25.024438   85408 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:23:25.024481   85408 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:23:25.024539   85408 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:23:25.049104   85408 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:23:25.049125   85408 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:23:25.049133   85408 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 20:23:25.049210   85408 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-436069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:23:25.049266   85408 ssh_runner.go:195] Run: crio config
	I1002 20:23:25.094609   85408 cni.go:84] Creating CNI manager for ""
	I1002 20:23:25.094640   85408 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:23:25.094661   85408 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:23:25.094681   85408 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-436069 NodeName:addons-436069 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:23:25.094835   85408 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-436069"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
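The generated config is staged a few lines below as /var/tmp/minikube/kubeadm.yaml.new. Since kubeadm has shipped a "config validate" subcommand since v1.26, a spot check like the following should be possible against the v1.34.1 binaries used here (a hypothetical check, not part of the test flow):

    # Validate the staged kubeadm config inside the node.
    minikube ssh -p addons-436069 -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new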
	
	I1002 20:23:25.094903   85408 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:23:25.102927   85408 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:23:25.103000   85408 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:23:25.110274   85408 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1002 20:23:25.122287   85408 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:23:25.137390   85408 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1002 20:23:25.149451   85408 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:23:25.153030   85408 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:23:25.162415   85408 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:23:25.240043   85408 ssh_runner.go:195] Run: sudo systemctl start kubelet
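After the unit files are copied, minikube reloads systemd and starts kubelet. A quick way to confirm the service actually came up, sketched with standard systemd tooling rather than taken from this run:

    # Check the unit state and the most recent kubelet output
    sudo systemctl is-active kubelet
    sudo journalctl -u kubelet -n 50 --no-pager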
	I1002 20:23:25.271306   85408 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069 for IP: 192.168.49.2
	I1002 20:23:25.271331   85408 certs.go:195] generating shared ca certs ...
	I1002 20:23:25.271352   85408 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.271502   85408 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:23:25.420752   85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt ...
	I1002 20:23:25.420782   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt: {Name:mkc601f6be1d2302a94e692bc2d9ae2acda9800b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.420967   85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key ...
	I1002 20:23:25.420979   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key: {Name:mkef1bbc5960baece2e5e5207bc7cd1f9d83225b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.421057   85408 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:23:25.734778   85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt ...
	I1002 20:23:25.734807   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt: {Name:mk909181c3a57ff65c6125df90f7a6ad13c2c87a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.734977   85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key ...
	I1002 20:23:25.734989   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key: {Name:mk54dbf10beaad6229e3a5278806b34b0e358f50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.735071   85408 certs.go:257] generating profile certs ...
	I1002 20:23:25.735126   85408 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key
	I1002 20:23:25.735140   85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt with IP's: []
	I1002 20:23:25.758012   85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt ...
	I1002 20:23:25.758032   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.crt: {Name:mk5cfbb52b8d031396930e7bff64e6ce2c5aecc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.758166   85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key ...
	I1002 20:23:25.758176   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/client.key: {Name:mk7a3d8b24057fb4566bd07837c73eb7ac234a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.758247   85408 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8
	I1002 20:23:25.758265   85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:23:25.812050   85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 ...
	I1002 20:23:25.812075   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8: {Name:mke748ae572d29dfd254bc63419d11b8950b520c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.812228   85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8 ...
	I1002 20:23:25.812240   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8: {Name:mk7ff83bea87979549e28158b8cc4d11ae273add Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:25.812313   85408 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt.85a3edf8 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt
	I1002 20:23:25.812394   85408 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key.85a3edf8 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key
	I1002 20:23:25.812446   85408 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key
	I1002 20:23:25.812465   85408 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt with IP's: []
	I1002 20:23:26.091720   85408 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt ...
	I1002 20:23:26.091765   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt: {Name:mk0b73dd9fcbf5d26004a2ec947a847ce4340df3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:26.091935   85408 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key ...
	I1002 20:23:26.091947   85408 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key: {Name:mk64f2dd7a04d07bb42e524cc4136dbc291fde1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:23:26.092121   85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:23:26.092166   85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:23:26.092190   85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:23:26.092211   85408 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:23:26.092797   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:23:26.111130   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:23:26.128377   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:23:26.145383   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:23:26.163536   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 20:23:26.182489   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 20:23:26.199978   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:23:26.217140   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/addons-436069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 20:23:26.233794   85408 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:23:26.253282   85408 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:23:26.265620   85408 ssh_runner.go:195] Run: openssl version
	I1002 20:23:26.271633   85408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:23:26.282710   85408 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:23:26.286383   85408 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:23:26.286439   85408 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:23:26.319830   85408 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
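The b5213941.0 link name is the OpenSSL subject hash of minikubeCA.pem, which is exactly what the `openssl x509 -hash` call above computes. A minimal check of that correspondence (illustrative; the hash value comes from this log's link name):

    # The printed hash should match the .0 symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0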
	I1002 20:23:26.328433   85408 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:23:26.332002   85408 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:23:26.332082   85408 kubeadm.go:400] StartCluster: {Name:addons-436069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-436069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:23:26.332162   85408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:23:26.332204   85408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:23:26.359494   85408 cri.go:89] found id: ""
	I1002 20:23:26.359578   85408 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:23:26.367639   85408 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:23:26.375646   85408 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:23:26.375697   85408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:23:26.383527   85408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:23:26.383550   85408 kubeadm.go:157] found existing configuration files:
	
	I1002 20:23:26.383592   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:23:26.390960   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:23:26.391023   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:23:26.398055   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:23:26.405339   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:23:26.405398   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:23:26.412346   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:23:26.419701   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:23:26.419776   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:23:26.426922   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:23:26.434164   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:23:26.434238   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
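The four grep/rm pairs above implement a single rule: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed before init. A compact shell equivalent of that logic (a sketch, not a command minikube runs; grep also exits non-zero when the file is missing, which is the case in this run):

    # Drop stale kubeconfigs that don't point at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done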
	I1002 20:23:26.441701   85408 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:23:26.498191   85408 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:23:26.553837   85408 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:27:30.786627   85408 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:27:30.786779   85408 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:27:30.789580   85408 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:27:30.789700   85408 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:27:30.789858   85408 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:27:30.789956   85408 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:27:30.790033   85408 kubeadm.go:318] OS: Linux
	I1002 20:27:30.790109   85408 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:27:30.790178   85408 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:27:30.790258   85408 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:27:30.790343   85408 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:27:30.790391   85408 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:27:30.790441   85408 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:27:30.790483   85408 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:27:30.790523   85408 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:27:30.790591   85408 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:27:30.790673   85408 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:27:30.790880   85408 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:27:30.790999   85408 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:27:30.794072   85408 out.go:252]   - Generating certificates and keys ...
	I1002 20:27:30.794194   85408 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:27:30.794306   85408 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:27:30.794367   85408 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:27:30.794417   85408 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:27:30.794471   85408 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:27:30.794540   85408 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:27:30.794616   85408 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:27:30.794848   85408 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:27:30.794952   85408 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:27:30.795105   85408 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:27:30.795171   85408 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:27:30.795225   85408 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:27:30.795263   85408 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:27:30.795315   85408 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:27:30.795373   85408 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:27:30.795431   85408 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:27:30.795487   85408 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:27:30.795546   85408 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:27:30.795609   85408 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:27:30.795676   85408 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:27:30.795773   85408 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:27:30.797680   85408 out.go:252]   - Booting up control plane ...
	I1002 20:27:30.797793   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:27:30.797879   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:27:30.797942   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:27:30.798024   85408 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:27:30.798097   85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:27:30.798178   85408 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:27:30.798269   85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:27:30.798326   85408 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:27:30.798444   85408 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:27:30.798536   85408 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:27:30.798602   85408 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.887694ms
	I1002 20:27:30.798680   85408 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:27:30.798784   85408 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:27:30.798878   85408 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:27:30.798950   85408 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:27:30.799011   85408 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000591692s
	I1002 20:27:30.799082   85408 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000805559s
	I1002 20:27:30.799139   85408 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000903266s
	I1002 20:27:30.799145   85408 kubeadm.go:318] 
	I1002 20:27:30.799221   85408 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:27:30.799291   85408 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:27:30.799364   85408 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:27:30.799446   85408 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:27:30.799526   85408 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:27:30.799599   85408 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:27:30.799628   85408 kubeadm.go:318] 
	W1002 20:27:30.799823   85408 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [addons-436069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.887694ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000591692s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000805559s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000903266s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
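kubeadm's troubleshooting advice above reduces to two steps: find the control-plane containers and read their logs. A sketch that automates both with crictl (the --name filter takes a regular expression; flags assumed from standard crictl, not taken from this run):

    # Dump the tail of every kube control-plane container's log
    SOCK=unix:///var/run/crio/crio.sock
    for id in $(sudo crictl --runtime-endpoint "$SOCK" ps -a -q --name 'kube-(apiserver|controller-manager|scheduler)'); do
      echo "=== container $id ==="
      sudo crictl --runtime-endpoint "$SOCK" logs --tail 50 "$id"
    done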
	
	I1002 20:27:30.799913   85408 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:27:31.249692   85408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
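Between attempts, minikube resets kubeadm state and checks whether kubelet is still active. To see what the reset left behind, one could list the state directories named in the config earlier in this log (illustrative only; exactly what `kubeadm reset` purges varies by version and configuration):

    # Inspect what remains after the reset
    sudo ls -la /etc/kubernetes /etc/kubernetes/manifests /var/lib/minikube/etcd 2>/dev/null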
	I1002 20:27:31.262359   85408 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:27:31.262411   85408 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:27:31.270431   85408 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:27:31.270451   85408 kubeadm.go:157] found existing configuration files:
	
	I1002 20:27:31.270513   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 20:27:31.278494   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:27:31.278561   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:27:31.285991   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 20:27:31.293609   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:27:31.293660   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:27:31.301370   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 20:27:31.309321   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:27:31.309396   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:27:31.317135   85408 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 20:27:31.324959   85408 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:27:31.325015   85408 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:27:31.332591   85408 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:27:31.367560   85408 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:27:31.367642   85408 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:27:31.388019   85408 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:27:31.388130   85408 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:27:31.388175   85408 kubeadm.go:318] OS: Linux
	I1002 20:27:31.388275   85408 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:27:31.388370   85408 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:27:31.388438   85408 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:27:31.388516   85408 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:27:31.388583   85408 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:27:31.388671   85408 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:27:31.388782   85408 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:27:31.388876   85408 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:27:31.444609   85408 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:27:31.444795   85408 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:27:31.444986   85408 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:27:31.452273   85408 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:27:31.456338   85408 out.go:252]   - Generating certificates and keys ...
	I1002 20:27:31.456445   85408 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:27:31.456533   85408 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:27:31.456651   85408 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:27:31.456758   85408 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:27:31.456863   85408 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:27:31.456948   85408 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:27:31.457040   85408 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:27:31.457133   85408 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:27:31.457227   85408 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:27:31.457341   85408 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:27:31.457381   85408 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:27:31.457440   85408 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:27:31.672954   85408 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:27:32.025360   85408 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:27:32.159044   85408 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:27:32.278275   85408 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:27:32.381591   85408 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:27:32.382085   85408 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:27:32.384393   85408 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:27:32.387571   85408 out.go:252]   - Booting up control plane ...
	I1002 20:27:32.387712   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:27:32.387806   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:27:32.387893   85408 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:27:32.400141   85408 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:27:32.400245   85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:27:32.406610   85408 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:27:32.407056   85408 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:27:32.407273   85408 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:27:32.506375   85408 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:27:32.506555   85408 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:27:33.008296   85408 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.947449ms
	I1002 20:27:33.011130   85408 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:27:33.011241   85408 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 20:27:33.011320   85408 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:27:33.011387   85408 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:31:33.011476   85408 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
	I1002 20:31:33.011684   85408 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
	I1002 20:31:33.011831   85408 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
	I1002 20:31:33.011842   85408 kubeadm.go:318] 
	I1002 20:31:33.011975   85408 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:31:33.012102   85408 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:31:33.012245   85408 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:31:33.012388   85408 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:31:33.012490   85408 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:31:33.012639   85408 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:31:33.012652   85408 kubeadm.go:318] 
	I1002 20:31:33.015272   85408 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:31:33.015445   85408 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:31:33.016147   85408 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
	I1002 20:31:33.016208   85408 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:31:33.016314   85408 kubeadm.go:402] duration metric: took 8m6.684244277s to StartCluster
	I1002 20:31:33.016377   85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:31:33.016435   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:31:33.041911   85408 cri.go:89] found id: ""
	I1002 20:31:33.041945   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.041953   85408 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:31:33.041959   85408 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:31:33.042007   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:31:33.070403   85408 cri.go:89] found id: ""
	I1002 20:31:33.070435   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.070447   85408 logs.go:284] No container was found matching "etcd"
	I1002 20:31:33.070458   85408 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:31:33.070523   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:31:33.097185   85408 cri.go:89] found id: ""
	I1002 20:31:33.097213   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.097221   85408 logs.go:284] No container was found matching "coredns"
	I1002 20:31:33.097234   85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:31:33.097299   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:31:33.123097   85408 cri.go:89] found id: ""
	I1002 20:31:33.123123   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.123132   85408 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:31:33.123139   85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:31:33.123187   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:31:33.149186   85408 cri.go:89] found id: ""
	I1002 20:31:33.149209   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.149217   85408 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:31:33.149222   85408 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:31:33.149271   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:31:33.173539   85408 cri.go:89] found id: ""
	I1002 20:31:33.173566   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.173575   85408 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:31:33.173581   85408 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:31:33.173628   85408 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:31:33.199446   85408 cri.go:89] found id: ""
	I1002 20:31:33.199474   85408 logs.go:282] 0 containers: []
	W1002 20:31:33.199485   85408 logs.go:284] No container was found matching "kindnet"
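The seven queries above poll one component name at a time and find nothing. Since every control-plane pod carries the same namespace label, the sweep collapses to a single call using the same label filter that appears earlier in this log:

    # One query for every kube-system container, in any state
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system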
	I1002 20:31:33.199498   85408 logs.go:123] Gathering logs for kubelet ...
	I1002 20:31:33.199514   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:31:33.266874   85408 logs.go:123] Gathering logs for dmesg ...
	I1002 20:31:33.266919   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:31:33.281732   85408 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:31:33.281785   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:31:33.340504   85408 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:31:33.331835    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:31:33.332360    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:31:33.333937    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:31:33.334433    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 20:31:33.336143    2369 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
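This describe-nodes failure is expected at this point: kubectl dials localhost:8443 and nothing is listening, because kube-apiserver never became healthy. A direct probe makes the same point without kubectl's retries (illustrative):

    # Nothing serving on the apiserver port -> connection refused
    curl -k https://localhost:8443/livez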
	I1002 20:31:33.340540   85408 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:31:33.340555   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:31:33.403016   85408 logs.go:123] Gathering logs for container status ...
	I1002 20:31:33.403058   85408 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1002 20:31:33.431521   85408 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.947449ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:31:33.431601   85408 out.go:285] * 
	W1002 20:31:33.431669   85408 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.947449ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:31:33.431682   85408 out.go:285] * 
	W1002 20:31:33.433538   85408 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:31:33.437657   85408 out.go:203] 
	W1002 20:31:33.439276   85408 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.947449ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000054721s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000034465s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000172235s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:31:33.439300   85408 out.go:285] * 
	I1002 20:31:33.441966   85408 out.go:203] 

** /stderr **
addons_test.go:110: out/minikube-linux-amd64 start -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (514.80s)
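
kubeadm's troubleshooting advice above can be followed directly while the failed node container still exists. A minimal sketch in bash, assuming the addons-436069 profile from this run has not yet been deleted (CONTAINERID is a placeholder, and curl is assumed to be present in the base image):

# List the control-plane containers inside the minikube node, exactly as kubeadm suggests:
minikube -p addons-436069 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
# Inspect the logs of whichever kube-apiserver/kube-controller-manager/kube-scheduler container exited:
minikube -p addons-436069 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
# The health endpoints kubeadm polls can also be probed from inside the node:
minikube -p addons-436069 ssh -- curl -sk https://127.0.0.1:10259/livez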

                                                
                                    
TestErrorSpam/setup (500.5s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio: exit status 80 (8m20.487459434s)

                                                
                                                
-- stdout --
	* [nospam-461767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "nospam-461767" primary control-plane node in "nospam-461767" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001129005s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000963923s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001087023s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001123018s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002090453s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.002090453s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio" failed: exit status 80
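The flood of per-line assertions that follows is this test working as intended: error_spam_test treats any stderr line that does not match a small allowlist as spam, so each line of the kubeadm failure above gets reported individually. A rough bash illustration of that shape of check, not the test's actual implementation (stderr.txt and allowlist.txt are hypothetical):

# Flag every non-blank stderr line that matches nothing in the allowlist.
while IFS= read -r line; do
  [ -z "${line//[[:space:]]/}" ] && continue
  grep -qF -f allowlist.txt <<<"$line" || echo "unexpected stderr: $line"
done < stderr.txt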
error_spam_test.go:96: unexpected stderr: "! initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-kubelet-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"front-proxy-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/ca\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/server\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/server serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/peer\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"etcd/healthcheck-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"apiserver-etcd-client\" certificate and key"
error_spam_test.go:96: unexpected stderr: "[certs] Generating \"sa\" key and public key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.001129005s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000963923s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.001087023s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.001123018s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "X Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.002090453s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "[init] Using Kubernetes version: v1.34.1"
error_spam_test.go:96: unexpected stderr: "[preflight] Running pre-flight checks"
error_spam_test.go:96: unexpected stderr: "[preflight] The system verification failed. Printing the output from the verification:"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mKERNEL_VERSION\x1b[0m: \x1b[0;32m6.8.0-1041-gcp\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mOS\x1b[0m: \x1b[0;32mLinux\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPU\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_CPUSET\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_DEVICES\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_FREEZER\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_MEMORY\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_PIDS\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_HUGETLB\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "\x1b[0;37mCGROUPS_IO\x1b[0m: \x1b[0;32menabled\x1b[0m"
error_spam_test.go:96: unexpected stderr: "[preflight] Pulling images required for setting up a Kubernetes cluster"
error_spam_test.go:96: unexpected stderr: "[preflight] This might take a minute or two, depending on the speed of your internet connection"
error_spam_test.go:96: unexpected stderr: "[preflight] You can also perform this action beforehand using 'kubeadm config images pull'"
error_spam_test.go:96: unexpected stderr: "[certs] Using certificateDir folder \"/var/lib/minikube/certs\""
error_spam_test.go:96: unexpected stderr: "[certs] Using existing ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-kubelet-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing front-proxy-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/ca certificate authority"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/server certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/peer certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing etcd/healthcheck-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using existing apiserver-etcd-client certificate and key on disk"
error_spam_test.go:96: unexpected stderr: "[certs] Using the existing \"sa\" key"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\""
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"super-admin.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"kubelet.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[kubeconfig] Writing \"scheduler.conf\" kubeconfig file"
error_spam_test.go:96: unexpected stderr: "[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Using manifest folder \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-apiserver\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-controller-manager\""
error_spam_test.go:96: unexpected stderr: "[control-plane] Creating static Pod manifest for \"kube-scheduler\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\""
error_spam_test.go:96: unexpected stderr: "[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\""
error_spam_test.go:96: unexpected stderr: "[kubelet-start] Starting the kubelet"
error_spam_test.go:96: unexpected stderr: "[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\""
error_spam_test.go:96: unexpected stderr: "[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[kubelet-check] The kubelet is healthy after 1.002090453s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s"
error_spam_test.go:96: unexpected stderr: "[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s"
error_spam_test.go:96: unexpected stderr: "A control plane component may have crashed or exited when started by the container runtime."
error_spam_test.go:96: unexpected stderr: "To troubleshoot, list all containers using your preferred container runtimes CLI."
error_spam_test.go:96: unexpected stderr: "Here is one example how you may list all running Kubernetes containers by using crictl:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'"
error_spam_test.go:96: unexpected stderr: "\tOnce you have found the failing container, you can inspect its logs with:"
error_spam_test.go:96: unexpected stderr: "\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1"
error_spam_test.go:96: unexpected stderr: "\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'"
error_spam_test.go:96: unexpected stderr: "error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]"
error_spam_test.go:96: unexpected stderr: "To see the stack trace of this error execute with --v=5 or higher"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:110: minikube stdout:
* [nospam-461767] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
- MINIKUBE_LOCATION=21682
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker driver with root privileges
* Starting "nospam-461767" primary control-plane node in "nospam-461767" cluster
* Pulling base image v0.0.48-1759382731-21643 ...
* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
error_spam_test.go:111: minikube stderr:
! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nospam-461767] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.001129005s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-controller-manager is not healthy after 4m0.000963923s
[control-plane-check] kube-scheduler is not healthy after 4m0.001087023s
[control-plane-check] kube-apiserver is not healthy after 4m0.001123018s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline]
To see the stack trace of this error execute with --v=5 or higher
* 
X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002090453s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s
[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.34.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 6.8.0-1041-gcp
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_IO: enabled
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.002090453s
[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
[control-plane-check] kube-apiserver is not healthy after 4m0.000220939s
[control-plane-check] kube-scheduler is not healthy after 4m0.000295059s
[control-plane-check] kube-controller-manager is not healthy after 4m0.000423669s
A control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
stderr:
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.49.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
To see the stack trace of this error execute with --v=5 or higher
* 
--- FAIL: TestErrorSpam/setup (500.50s)
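
The failure output above points repeatedly at the same next step: list the control-plane containers on the node and read the logs of whichever one crashed. A minimal sketch of that workflow, built only from commands already named in this run's output (the profile name nospam-461767 comes from this run; CONTAINERID is a placeholder, and the sudo prefix is an assumption about the node's default user):

	# open a shell on the minikube node for this profile
	minikube ssh -p nospam-461767
	# inside the node: list all Kubernetes containers, including exited ones, over the CRI-O socket
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of the failing container (substitute a real ID from the listing)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# back on the host: collect the full log bundle, as the advice box suggests
	minikube logs -p nospam-461767 --file=logs.txt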
TestFunctional/serial/StartWithProxy (498.28s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: exit status 80 (8m17.003679708s)
-- stdout --
	* [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Found network options:
	  - HTTP_PROXY=localhost:44325
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:44325 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00104398s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.0004673s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000600827s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528115s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.038953ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.038953ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

                                                
                                                
** /stderr **
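All three control-plane health checks in the kubeadm output above timed out, which points at the static pods never becoming healthy under CRI-O rather than at kubeadm itself. A minimal triage sketch, reusing the crictl commands kubeadm printed in the stderr block (sudo added here as an assumption; the container ID is a placeholder to fill in from the first command's output):

    # list every Kubernetes container CRI-O knows about, including crashed/exited ones
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # then read the logs of whichever control-plane container is failing
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>

Failing that, `minikube logs --file=logs.txt` (as the advice box above suggests) bundles the same material for a GitHub issue.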
functional_test.go:2241: failed minikube start. args "out/minikube-linux-amd64 start -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
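The inspect output shows the node container Running, with apiserver port 8441 published to 127.0.0.1:32781. To pull just that mapping out of the JSON, one can use the same Go-template form minikube itself runs later in this log for port 22 (a sketch; the profile name is the one from this run):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-012915
    # prints the host port (32781 in this run) that forwards to the apiserver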
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 6 (293.576295ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 20:48:23.784798  103002 status.go:458] kubeconfig endpoint: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
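Exit status 6 here is the expected "stale kubeconfig" symptom: the container is up, but the functional-012915 endpoint is missing from the kubeconfig the run is using, so `status` cannot reach the cluster. Outside of a post-mortem, the warning's own remedy applies (a sketch, assuming the same profile name):

    minikube update-context -p functional-012915
    # rewrites the kubeconfig entry for this profile to the current apiserver address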
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-887627                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-887627   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ delete  │ -p download-only-072312                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-072312   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start   │ --download-only -p download-docker-272222 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p download-docker-272222                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start   │ --download-only -p binary-mirror-809560 --alsologtostderr --binary-mirror http://127.0.0.1:39541 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-809560   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p binary-mirror-809560                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-809560   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ disable dashboard -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ start   │ -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ start   │ -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-012915      │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:40:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:40:06.518450   98038 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:40:06.518560   98038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:40:06.518564   98038 out.go:374] Setting ErrFile to fd 2...
	I1002 20:40:06.518567   98038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:40:06.518783   98038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:40:06.519251   98038 out.go:368] Setting JSON to false
	I1002 20:40:06.520099   98038 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8547,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:40:06.520178   98038 start.go:140] virtualization: kvm guest
	I1002 20:40:06.522858   98038 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:40:06.524151   98038 notify.go:220] Checking for updates...
	I1002 20:40:06.524197   98038 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:40:06.525582   98038 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:40:06.526904   98038 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:40:06.527980   98038 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:40:06.529458   98038 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:40:06.530714   98038 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:40:06.532508   98038 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:40:06.556345   98038 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:40:06.556471   98038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:40:06.609793   98038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:40:06.600289836 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:40:06.609885   98038 docker.go:318] overlay module found
	I1002 20:40:06.611718   98038 out.go:179] * Using the docker driver based on user configuration
	I1002 20:40:06.613046   98038 start.go:304] selected driver: docker
	I1002 20:40:06.613055   98038 start.go:924] validating driver "docker" against <nil>
	I1002 20:40:06.613065   98038 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:40:06.613631   98038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:40:06.671225   98038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 20:40:06.660859743 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:40:06.671472   98038 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:40:06.671761   98038 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:40:06.673626   98038 out.go:179] * Using Docker driver with root privileges
	I1002 20:40:06.674764   98038 cni.go:84] Creating CNI manager for ""
	I1002 20:40:06.674817   98038 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:40:06.674823   98038 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 20:40:06.674886   98038 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:40:06.676233   98038 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:40:06.677418   98038 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:40:06.678481   98038 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:40:06.679452   98038 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:40:06.679480   98038 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:40:06.679494   98038 cache.go:58] Caching tarball of preloaded images
	I1002 20:40:06.679541   98038 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:40:06.679587   98038 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:40:06.679594   98038 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:40:06.679951   98038 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:40:06.679968   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json: {Name:mkeea48b9604c1e78ae75774d9940b77acff12e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:06.700473   98038 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:40:06.700483   98038 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:40:06.700499   98038 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:40:06.700527   98038 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:40:06.700622   98038 start.go:364] duration metric: took 82.44µs to acquireMachinesLock for "functional-012915"
	I1002 20:40:06.700639   98038 start.go:93] Provisioning new machine with config: &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:40:06.700694   98038 start.go:125] createHost starting for "" (driver="docker")
	I1002 20:40:06.702704   98038 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1002 20:40:06.702949   98038 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:44325 to docker env.
	I1002 20:40:06.702969   98038 start.go:159] libmachine.API.Create for "functional-012915" (driver="docker")
	I1002 20:40:06.702989   98038 client.go:168] LocalClient.Create starting
	I1002 20:40:06.703074   98038 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 20:40:06.703098   98038 main.go:141] libmachine: Decoding PEM data...
	I1002 20:40:06.703110   98038 main.go:141] libmachine: Parsing certificate...
	I1002 20:40:06.703159   98038 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 20:40:06.703183   98038 main.go:141] libmachine: Decoding PEM data...
	I1002 20:40:06.703189   98038 main.go:141] libmachine: Parsing certificate...
	I1002 20:40:06.703512   98038 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 20:40:06.720067   98038 cli_runner.go:211] docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 20:40:06.720126   98038 network_create.go:284] running [docker network inspect functional-012915] to gather additional debugging logs...
	I1002 20:40:06.720141   98038 cli_runner.go:164] Run: docker network inspect functional-012915
	W1002 20:40:06.737431   98038 cli_runner.go:211] docker network inspect functional-012915 returned with exit code 1
	I1002 20:40:06.737449   98038 network_create.go:287] error running [docker network inspect functional-012915]: docker network inspect functional-012915: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-012915 not found
	I1002 20:40:06.737460   98038 network_create.go:289] output of [docker network inspect functional-012915]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-012915 not found
	
	** /stderr **
	I1002 20:40:06.737567   98038 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:40:06.754634   98038 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e468a0}
	I1002 20:40:06.754666   98038 network_create.go:124] attempt to create docker network functional-012915 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 20:40:06.754715   98038 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-012915 functional-012915
	I1002 20:40:06.811894   98038 network_create.go:108] docker network functional-012915 192.168.49.0/24 created
	I1002 20:40:06.811918   98038 kic.go:121] calculated static IP "192.168.49.2" for the "functional-012915" container
	I1002 20:40:06.811973   98038 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 20:40:06.828952   98038 cli_runner.go:164] Run: docker volume create functional-012915 --label name.minikube.sigs.k8s.io=functional-012915 --label created_by.minikube.sigs.k8s.io=true
	I1002 20:40:06.847126   98038 oci.go:103] Successfully created a docker volume functional-012915
	I1002 20:40:06.847200   98038 cli_runner.go:164] Run: docker run --rm --name functional-012915-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-012915 --entrypoint /usr/bin/test -v functional-012915:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 20:40:07.236877   98038 oci.go:107] Successfully prepared a docker volume functional-012915
	I1002 20:40:07.236948   98038 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:40:07.236977   98038 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 20:40:07.237028   98038 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-012915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 20:40:11.600520   98038 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v functional-012915:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.363436485s)
	I1002 20:40:11.600545   98038 kic.go:203] duration metric: took 4.363571457s to extract preloaded images to volume ...
	W1002 20:40:11.600632   98038 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 20:40:11.600667   98038 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 20:40:11.600698   98038 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 20:40:11.653135   98038 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-012915 --name functional-012915 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-012915 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-012915 --network functional-012915 --ip 192.168.49.2 --volume functional-012915:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 20:40:11.919091   98038 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Running}}
	I1002 20:40:11.938298   98038 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:40:11.955856   98038 cli_runner.go:164] Run: docker exec functional-012915 stat /var/lib/dpkg/alternatives/iptables
	I1002 20:40:11.999096   98038 oci.go:144] the created container "functional-012915" has a running status.
	I1002 20:40:11.999118   98038 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa...
	I1002 20:40:12.422772   98038 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 20:40:12.449255   98038 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:40:12.467018   98038 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 20:40:12.467030   98038 kic_runner.go:114] Args: [docker exec --privileged functional-012915 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 20:40:12.508374   98038 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:40:12.525981   98038 machine.go:93] provisionDockerMachine start ...
	I1002 20:40:12.526079   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:12.544865   98038 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:12.545172   98038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:40:12.545182   98038 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:40:12.689769   98038 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:40:12.689799   98038 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:40:12.689864   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:12.708280   98038 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:12.708494   98038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:40:12.708502   98038 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:40:12.861787   98038 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:40:12.861857   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:12.880237   98038 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:12.880445   98038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:40:12.880458   98038 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:40:13.024430   98038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:40:13.024451   98038 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:40:13.024467   98038 ubuntu.go:190] setting up certificates
	I1002 20:40:13.024475   98038 provision.go:84] configureAuth start
	I1002 20:40:13.024524   98038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:40:13.041630   98038 provision.go:143] copyHostCerts
	I1002 20:40:13.041676   98038 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:40:13.041683   98038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:40:13.041770   98038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:40:13.041859   98038 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:40:13.041863   98038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:40:13.041890   98038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:40:13.041941   98038 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:40:13.041944   98038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:40:13.041969   98038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:40:13.042015   98038 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
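
The server cert generated here must carry exactly the SANs in the san=[...] list above. A minimal Go sketch of such a cert using crypto/x509; it self-signs for brevity, whereas minikube signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-012915"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the SAN list from the log line above
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"functional-012915", "localhost", "minikube"},
	}
	// self-signed here (template doubles as parent); minikube uses its CA instead
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
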
	I1002 20:40:13.100891   98038 provision.go:177] copyRemoteCerts
	I1002 20:40:13.100944   98038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:40:13.100988   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.119170   98038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:40:13.221543   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:40:13.240946   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:40:13.258700   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:40:13.275648   98038 provision.go:87] duration metric: took 251.158308ms to configureAuth
	I1002 20:40:13.275668   98038 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:40:13.275877   98038 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:40:13.275984   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.293539   98038 main.go:141] libmachine: Using SSH client type: native
	I1002 20:40:13.293757   98038 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:40:13.293772   98038 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:40:13.548350   98038 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:40:13.548368   98038 machine.go:96] duration metric: took 1.022371498s to provisionDockerMachine
	I1002 20:40:13.548379   98038 client.go:171] duration metric: took 6.845385014s to LocalClient.Create
	I1002 20:40:13.548400   98038 start.go:167] duration metric: took 6.845430251s to libmachine.API.Create "functional-012915"
	I1002 20:40:13.548413   98038 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:40:13.548424   98038 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:40:13.548474   98038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:40:13.548515   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.565805   98038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:40:13.668664   98038 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:40:13.672125   98038 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:40:13.672141   98038 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:40:13.672155   98038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:40:13.672247   98038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:40:13.672325   98038 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:40:13.672392   98038 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:40:13.672424   98038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:40:13.680088   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:40:13.700560   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:40:13.718095   98038 start.go:296] duration metric: took 169.66821ms for postStartSetup
	I1002 20:40:13.718403   98038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:40:13.735589   98038 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:40:13.735864   98038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:40:13.735904   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.753716   98038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:40:13.852135   98038 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:40:13.856558   98038 start.go:128] duration metric: took 7.155837593s to createHost
	I1002 20:40:13.856588   98038 start.go:83] releasing machines lock for "functional-012915", held for 7.155958565s
	I1002 20:40:13.856657   98038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:40:13.876067   98038 out.go:179] * Found network options:
	I1002 20:40:13.877627   98038 out.go:179]   - HTTP_PROXY=localhost:44325
	W1002 20:40:13.878974   98038 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1002 20:40:13.880284   98038 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1002 20:40:13.881811   98038 ssh_runner.go:195] Run: cat /version.json
	I1002 20:40:13.881853   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.881867   98038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:40:13.881929   98038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:40:13.901464   98038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:40:13.901733   98038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:40:14.053410   98038 ssh_runner.go:195] Run: systemctl --version
	I1002 20:40:14.060224   98038 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:40:14.094861   98038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:40:14.099549   98038 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:40:14.099606   98038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:40:14.125179   98038 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
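
The find/mv one-liner above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the kindnet CNI recommended two lines later can own the pod network. A rough Go equivalent of that rename pass:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		name := e.Name()
		// skip directories and files already disabled, like the find predicate does
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			log.Printf("disabled %s", old)
		}
	}
}
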
	I1002 20:40:14.125196   98038 start.go:495] detecting cgroup driver to use...
	I1002 20:40:14.125230   98038 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:40:14.125274   98038 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:40:14.141190   98038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:40:14.153747   98038 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:40:14.153795   98038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:40:14.169967   98038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:40:14.187084   98038 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:40:14.267169   98038 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:40:14.351865   98038 docker.go:234] disabling docker service ...
	I1002 20:40:14.351916   98038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:40:14.370154   98038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:40:14.383484   98038 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:40:14.466864   98038 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:40:14.547842   98038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:40:14.560077   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:40:14.573447   98038 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:40:14.573499   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.583601   98038 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:40:14.583656   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.592097   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.600718   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.609334   98038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:40:14.617290   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.626319   98038 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.639505   98038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:40:14.647913   98038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:40:14.655150   98038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:40:14.662667   98038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:40:14.738790   98038 ssh_runner.go:195] Run: sudo systemctl restart crio
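
The sed calls before this restart boil down to rewriting two keys in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) plus the conmon_cgroup/default_sysctls block. A small Go sketch of the rewrite for the two simple keys; the sample file contents are an assumption for illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// hypothetical starting contents of 02-crio.conf
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
`
	// same semantics as: sed -i 's|^.*pause_image = .*$|...|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	fmt.Print(conf)
}
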
	I1002 20:40:14.841350   98038 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:40:14.841417   98038 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:40:14.845296   98038 start.go:563] Will wait 60s for crictl version
	I1002 20:40:14.845339   98038 ssh_runner.go:195] Run: which crictl
	I1002 20:40:14.848696   98038 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:40:14.872469   98038 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:40:14.872536   98038 ssh_runner.go:195] Run: crio --version
	I1002 20:40:14.899254   98038 ssh_runner.go:195] Run: crio --version
	I1002 20:40:14.927937   98038 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:40:14.929016   98038 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:40:14.945916   98038 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:40:14.949881   98038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
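
The bash one-liner above rewrites /etc/hosts idempotently: drop any stale host.minikube.internal line, append the fresh mapping, and copy the result back into place. The same logic in Go (writing the file directly is a simplification; the log stages through /tmp/h.$$ and sudo cp):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// equivalent of grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err) // needs root, like the sudo cp in the log
	}
}
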
	I1002 20:40:14.959684   98038 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:40:14.959800   98038 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:40:14.959842   98038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:40:14.990702   98038 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:40:14.990715   98038 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:40:14.990788   98038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:40:15.016213   98038 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:40:15.016238   98038 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:40:15.016245   98038 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:40:15.016330   98038 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:40:15.016384   98038 ssh_runner.go:195] Run: crio config
	I1002 20:40:15.062699   98038 cni.go:84] Creating CNI manager for ""
	I1002 20:40:15.062714   98038 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:40:15.062730   98038 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:40:15.062766   98038 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:40:15.062943   98038 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:40:15.063003   98038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:40:15.071051   98038 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:40:15.071102   98038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:40:15.078395   98038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:40:15.090401   98038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:40:15.105906   98038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
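
The 2213-byte kubeadm.yaml.new just shipped contains the four YAML documents dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick Go sketch that splits the file into its documents and prints each kind, using the external gopkg.in/yaml.v3 module:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // external module: go get gopkg.in/yaml.v3
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		// only the two identifying fields matter; everything else is ignored
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF after the last document
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}

Against the dump above this should print kubeadm.k8s.io/v1beta4 InitConfiguration and ClusterConfiguration, then kubelet.config.k8s.io/v1beta1 KubeletConfiguration and kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.
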
	I1002 20:40:15.118379   98038 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:40:15.121974   98038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 20:40:15.131942   98038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:40:15.213867   98038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:40:15.238133   98038 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:40:15.238146   98038 certs.go:195] generating shared ca certs ...
	I1002 20:40:15.238160   98038 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.238312   98038 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:40:15.238348   98038 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:40:15.238354   98038 certs.go:257] generating profile certs ...
	I1002 20:40:15.238403   98038 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:40:15.238418   98038 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt with IP's: []
	I1002 20:40:15.579785   98038 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt ...
	I1002 20:40:15.579801   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: {Name:mk15d72c1c732801cff8abe092cd16b79bf6fe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.580032   98038 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key ...
	I1002 20:40:15.580042   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key: {Name:mk98deacbea52b57213103cb9b828bcf027b68c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.580161   98038 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:40:15.580173   98038 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt.b416a645 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 20:40:15.762534   98038 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt.b416a645 ...
	I1002 20:40:15.762571   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt.b416a645: {Name:mk1477919c8ffc84dee933044093d437da527d24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.762792   98038 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645 ...
	I1002 20:40:15.762810   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645: {Name:mkc3eeb917a2a638de368b3d721ea9bb8994d8c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.762923   98038 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt.b416a645 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt
	I1002 20:40:15.763036   98038 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key
	I1002 20:40:15.763108   98038 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:40:15.763120   98038 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt with IP's: []
	I1002 20:40:15.896198   98038 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt ...
	I1002 20:40:15.896213   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt: {Name:mkf70883e799834117c4be0fc5c50cfa141e2e91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.896429   98038 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key ...
	I1002 20:40:15.896443   98038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key: {Name:mk48096559895fc28c1bb00a7a843685d371a3bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:40:15.896662   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:40:15.896699   98038 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:40:15.896706   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:40:15.896728   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:40:15.896759   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:40:15.896778   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:40:15.896813   98038 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:40:15.897436   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:40:15.917327   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:40:15.937704   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:40:15.956532   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:40:15.973773   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:40:15.991608   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:40:16.009587   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:40:16.027512   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:40:16.044614   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:40:16.064097   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:40:16.081874   98038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:40:16.099516   98038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:40:16.112129   98038 ssh_runner.go:195] Run: openssl version
	I1002 20:40:16.118155   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:40:16.126911   98038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:40:16.130837   98038 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:40:16.130895   98038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:40:16.165026   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:40:16.173592   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:40:16.182635   98038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:40:16.186535   98038 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:40:16.186581   98038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:40:16.220469   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:40:16.229075   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:40:16.237926   98038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:40:16.241626   98038 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:40:16.241675   98038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:40:16.276269   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
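
Each test -L / ln -fs pair above maintains OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is looked up through a <subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem here). A Go sketch of creating one such link, shelling out to openssl for the hash; it needs root to write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// same invocation as the log: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}
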
	I1002 20:40:16.285317   98038 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:40:16.288963   98038 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 20:40:16.289012   98038 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:40:16.289070   98038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:40:16.289126   98038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:40:16.317800   98038 cri.go:89] found id: ""
	I1002 20:40:16.317876   98038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:40:16.326276   98038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:40:16.334362   98038 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:40:16.334416   98038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:40:16.342250   98038 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:40:16.342265   98038 kubeadm.go:157] found existing configuration files:
	
	I1002 20:40:16.342307   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:40:16.349918   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:40:16.349962   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:40:16.357145   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:40:16.364530   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:40:16.364573   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:40:16.371816   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:40:16.379562   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:40:16.379641   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:40:16.387128   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:40:16.394567   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:40:16.394610   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:40:16.401613   98038 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:40:16.458339   98038 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:40:16.515788   98038 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:44:20.411026   98038 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 20:44:20.411246   98038 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:44:20.414062   98038 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:44:20.414193   98038 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:44:20.414403   98038 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:44:20.414531   98038 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:44:20.414599   98038 kubeadm.go:318] OS: Linux
	I1002 20:44:20.414660   98038 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:44:20.414726   98038 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:44:20.414809   98038 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:44:20.414895   98038 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:44:20.414947   98038 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:44:20.415006   98038 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:44:20.415055   98038 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:44:20.415114   98038 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:44:20.415208   98038 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:44:20.415300   98038 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:44:20.415423   98038 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:44:20.415475   98038 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:44:20.418019   98038 out.go:252]   - Generating certificates and keys ...
	I1002 20:44:20.418083   98038 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:44:20.418146   98038 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:44:20.418209   98038 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 20:44:20.418272   98038 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 20:44:20.418348   98038 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 20:44:20.418413   98038 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 20:44:20.418493   98038 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 20:44:20.418637   98038 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:44:20.418691   98038 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 20:44:20.418835   98038 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 20:44:20.418889   98038 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 20:44:20.418946   98038 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 20:44:20.418985   98038 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 20:44:20.419030   98038 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:44:20.419070   98038 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:44:20.419157   98038 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:44:20.419235   98038 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:44:20.419316   98038 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:44:20.419392   98038 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:44:20.419491   98038 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:44:20.419544   98038 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:44:20.421108   98038 out.go:252]   - Booting up control plane ...
	I1002 20:44:20.421183   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:44:20.421269   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:44:20.421333   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:44:20.421423   98038 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:44:20.421500   98038 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:44:20.421587   98038 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:44:20.421661   98038 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:44:20.421692   98038 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:44:20.421847   98038 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:44:20.421963   98038 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:44:20.422012   98038 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00104398s
	I1002 20:44:20.422107   98038 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:44:20.422197   98038 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:44:20.422274   98038 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:44:20.422344   98038 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:44:20.422401   98038 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.0004673s
	I1002 20:44:20.422459   98038 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000600827s
	I1002 20:44:20.422524   98038 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000528115s
	I1002 20:44:20.422526   98038 kubeadm.go:318] 
	I1002 20:44:20.422603   98038 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:44:20.422669   98038 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:44:20.422755   98038 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:44:20.422852   98038 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:44:20.422937   98038 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:44:20.423050   98038 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:44:20.423105   98038 kubeadm.go:318] 
	W1002 20:44:20.423240   98038 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-012915 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.00104398s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.0004673s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000600827s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000528115s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
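	The kubeadm dump above ends with the generic wait-control-plane failure: none of the three control-plane components ever answered their health endpoints. A minimal manual triage, assuming SSH access to the node and reusing the CRI-O socket path and log commands quoted in the output itself, would be:
	    # hedged sketch: inspect the failed control plane from inside the node
	    minikube ssh -p functional-012915
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute an ID from ps -a
	    sudo journalctl -u crio -n 400 --no-pager
	    sudo journalctl -u kubelet -n 400 --no-pager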
	
	I1002 20:44:20.423330   98038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:44:20.868434   98038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:44:20.881531   98038 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:44:20.881590   98038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:44:20.890527   98038 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:44:20.890539   98038 kubeadm.go:157] found existing configuration files:
	
	I1002 20:44:20.890591   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:44:20.898306   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:44:20.898372   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:44:20.905566   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:44:20.913071   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:44:20.913112   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:44:20.920302   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:44:20.927397   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:44:20.927436   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:44:20.934345   98038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:44:20.941447   98038 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:44:20.941497   98038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
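	The grep-and-remove sequence above is minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and deleted when the check fails (here every file was already absent, so each grep exits with status 2). A shell equivalent of the sweep, assuming the same endpoint and file names, is:
	    # hedged sketch of the stale kubeconfig cleanup performed above
	    endpoint="https://control-plane.minikube.internal:8441"
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	    done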
	I1002 20:44:20.948691   98038 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:44:20.984331   98038 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:44:20.984403   98038 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:44:21.004658   98038 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:44:21.004716   98038 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:44:21.004779   98038 kubeadm.go:318] OS: Linux
	I1002 20:44:21.004817   98038 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:44:21.004920   98038 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:44:21.004996   98038 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:44:21.005059   98038 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:44:21.005122   98038 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:44:21.005184   98038 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:44:21.005238   98038 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:44:21.005272   98038 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:44:21.064351   98038 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:44:21.064480   98038 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:44:21.064569   98038 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:44:21.071149   98038 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:44:21.074769   98038 out.go:252]   - Generating certificates and keys ...
	I1002 20:44:21.074835   98038 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:44:21.074922   98038 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:44:21.075005   98038 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:44:21.075092   98038 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:44:21.075189   98038 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:44:21.075259   98038 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:44:21.075349   98038 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:44:21.075426   98038 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:44:21.075523   98038 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:44:21.075625   98038 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:44:21.075674   98038 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:44:21.075758   98038 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:44:21.588985   98038 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:44:21.908769   98038 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:44:22.005173   98038 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:44:22.170684   98038 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:44:22.399276   98038 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:44:22.399768   98038 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:44:22.401993   98038 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:44:22.403845   98038 out.go:252]   - Booting up control plane ...
	I1002 20:44:22.403945   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:44:22.404044   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:44:22.404141   98038 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:44:22.417530   98038 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:44:22.417635   98038 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:44:22.424028   98038 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:44:22.425103   98038 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:44:22.425157   98038 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:44:22.522090   98038 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:44:22.522247   98038 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:44:23.023022   98038 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.038953ms
	I1002 20:44:23.025845   98038 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:44:23.025957   98038 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:44:23.026036   98038 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:44:23.026106   98038 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 20:48:23.027214   98038 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	I1002 20:48:23.027593   98038 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	I1002 20:48:23.027830   98038 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	I1002 20:48:23.027857   98038 kubeadm.go:318] 
	I1002 20:48:23.028072   98038 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 20:48:23.028261   98038 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 20:48:23.028494   98038 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 20:48:23.028727   98038 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 20:48:23.028913   98038 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 20:48:23.029104   98038 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 20:48:23.029111   98038 kubeadm.go:318] 
	I1002 20:48:23.030869   98038 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 20:48:23.031034   98038 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 20:48:23.031549   98038 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 20:48:23.031619   98038 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 20:48:23.031709   98038 kubeadm.go:402] duration metric: took 8m6.742700188s to StartCluster
	I1002 20:48:23.031787   98038 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:48:23.031843   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:48:23.060058   98038 cri.go:89] found id: ""
	I1002 20:48:23.060096   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.060103   98038 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:48:23.060111   98038 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:48:23.060168   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:48:23.088496   98038 cri.go:89] found id: ""
	I1002 20:48:23.088517   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.088525   98038 logs.go:284] No container was found matching "etcd"
	I1002 20:48:23.088530   98038 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:48:23.088587   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:48:23.117388   98038 cri.go:89] found id: ""
	I1002 20:48:23.117410   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.117420   98038 logs.go:284] No container was found matching "coredns"
	I1002 20:48:23.117426   98038 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:48:23.117492   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:48:23.146574   98038 cri.go:89] found id: ""
	I1002 20:48:23.146594   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.146600   98038 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:48:23.146606   98038 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:48:23.146661   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:48:23.174829   98038 cri.go:89] found id: ""
	I1002 20:48:23.174852   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.174861   98038 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:48:23.174869   98038 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:48:23.174936   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:48:23.201072   98038 cri.go:89] found id: ""
	I1002 20:48:23.201090   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.201097   98038 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:48:23.201102   98038 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:48:23.201152   98038 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:48:23.225909   98038 cri.go:89] found id: ""
	I1002 20:48:23.225925   98038 logs.go:282] 0 containers: []
	W1002 20:48:23.225931   98038 logs.go:284] No container was found matching "kindnet"
	I1002 20:48:23.225939   98038 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:48:23.225949   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:48:23.289964   98038 logs.go:123] Gathering logs for container status ...
	I1002 20:48:23.289992   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:48:23.322367   98038 logs.go:123] Gathering logs for kubelet ...
	I1002 20:48:23.322391   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:48:23.391673   98038 logs.go:123] Gathering logs for dmesg ...
	I1002 20:48:23.391699   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:48:23.406166   98038 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:48:23.406183   98038 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:48:23.464185   98038 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:48:23.457023    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.457540    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.459200    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.459873    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.461379    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:48:23.457023    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.457540    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.459200    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.459873    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:23.461379    2406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
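	The describe-nodes gathering fails for the same reason as every other API call in this run: nothing is listening on port 8441. A quick probe of the livez endpoint kubeadm was polling, assuming curl is available in the node image, is:
	    # hedged sketch: confirm the apiserver never bound 192.168.49.2:8441
	    minikube ssh -p functional-012915 -- curl -sk https://localhost:8441/livez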
	W1002 20:48:23.464203   98038 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.038953ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 20:48:23.464245   98038 out.go:285] * 
	W1002 20:48:23.464310   98038 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.038953ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:48:23.464321   98038 out.go:285] * 
	W1002 20:48:23.465980   98038 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
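	For this run the log bundle requested by the box would be collected against the failing profile; the -p flag is an assumption here, since the boxed message omits it:
	    minikube logs --file=logs.txt -p functional-012915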
	I1002 20:48:23.470026   98038 out.go:203] 
	W1002 20:48:23.471546   98038 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.038953ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000716213s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000726752s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000813219s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 20:48:23.471609   98038 out.go:285] * 
	I1002 20:48:23.473914   98038 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:48:16 functional-012915 crio[779]: time="2025-10-02T20:48:16.882280083Z" level=info msg="createCtr: removing container e9b28d2f6d2d178d4bc09279f4873ee1a77bc0146fecab44add7af0d518648b7" id=815b712c-20c0-4671-b006-a18168afadc9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:16 functional-012915 crio[779]: time="2025-10-02T20:48:16.882312126Z" level=info msg="createCtr: deleting container e9b28d2f6d2d178d4bc09279f4873ee1a77bc0146fecab44add7af0d518648b7 from storage" id=815b712c-20c0-4671-b006-a18168afadc9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:16 functional-012915 crio[779]: time="2025-10-02T20:48:16.884422248Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-012915_kube-system_7e750209f40bc1241cc38d19476e612c_0" id=815b712c-20c0-4671-b006-a18168afadc9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.85549468Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=47460480-ac23-4513-8e93-68dcbb0ef748 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.856436993Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=9e726918-336b-40e8-b524-2089b8b7cb4a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.857311155Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-012915/kube-scheduler" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.857556799Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.86091775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.861383094Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.877486361Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.878772791Z" level=info msg="createCtr: deleting container ID ca9742ed3553ee2e09a03f5ed4f84aa5ddf601398bfd8d2ad49079a7d3df7b3b from idIndex" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.878853269Z" level=info msg="createCtr: removing container ca9742ed3553ee2e09a03f5ed4f84aa5ddf601398bfd8d2ad49079a7d3df7b3b" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.878888188Z" level=info msg="createCtr: deleting container ca9742ed3553ee2e09a03f5ed4f84aa5ddf601398bfd8d2ad49079a7d3df7b3b from storage" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:17 functional-012915 crio[779]: time="2025-10-02T20:48:17.881095529Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=a72a2506-00e8-4203-b540-32c2cc5bc37d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.855540235Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=a6d581a5-4192-4d7d-a5b0-f013e1953b48 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.856437623Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=b3a02104-4875-4f1e-9542-208f2fd637aa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.857368834Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.857634095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.861956338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.862396164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.878810114Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.880276261Z" level=info msg="createCtr: deleting container ID 5ca8db9234511dc365a1d466ee3e7e7bf9b3459f839180756c7fc40a155533ab from idIndex" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.880315023Z" level=info msg="createCtr: removing container 5ca8db9234511dc365a1d466ee3e7e7bf9b3459f839180756c7fc40a155533ab" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.88035795Z" level=info msg="createCtr: deleting container 5ca8db9234511dc365a1d466ee3e7e7bf9b3459f839180756c7fc40a155533ab from storage" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:48:21 functional-012915 crio[779]: time="2025-10-02T20:48:21.882477684Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_71bc375daf4e76699563858eee44bc44_0" id=73163973-762c-4ca0-8b05-abe2ac243ceb name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:48:24.361720    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:24.362323    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:24.363936    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:24.364368    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:48:24.365752    2536 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:48:24 up  2:30,  0 user,  load average: 0.00, 0.05, 0.48
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:48:16 functional-012915 kubelet[1773]: E1002 20:48:16.884860    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:48:16 functional-012915 kubelet[1773]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:48:16 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:48:16 functional-012915 kubelet[1773]: E1002 20:48:16.884893    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 20:48:17 functional-012915 kubelet[1773]: E1002 20:48:17.855068    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:48:17 functional-012915 kubelet[1773]: E1002 20:48:17.881453    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:48:17 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:48:17 functional-012915 kubelet[1773]:  > podSandboxID="40e327266da6ea4287d08a8331b8fae96b768bae7d96ad99222891f51d752347"
	Oct 02 20:48:17 functional-012915 kubelet[1773]: E1002 20:48:17.881560    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:48:17 functional-012915 kubelet[1773]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:48:17 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:48:17 functional-012915 kubelet[1773]: E1002 20:48:17.881594    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 20:48:19 functional-012915 kubelet[1773]: E1002 20:48:19.477386    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:48:19 functional-012915 kubelet[1773]: I1002 20:48:19.636369    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:48:19 functional-012915 kubelet[1773]: E1002 20:48:19.636855    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:48:20 functional-012915 kubelet[1773]: E1002 20:48:20.403486    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 20:48:21 functional-012915 kubelet[1773]: E1002 20:48:21.855107    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:48:21 functional-012915 kubelet[1773]: E1002 20:48:21.882784    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:48:21 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:48:21 functional-012915 kubelet[1773]:  > podSandboxID="c697c06eaaf20ef2888311ed130f6d0dab82776628f2d6e3d184e9abb1e08331"
	Oct 02 20:48:21 functional-012915 kubelet[1773]: E1002 20:48:21.882884    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:48:21 functional-012915 kubelet[1773]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(71bc375daf4e76699563858eee44bc44): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:48:21 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:48:21 functional-012915 kubelet[1773]: E1002 20:48:21.882917    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="71bc375daf4e76699563858eee44bc44"
	Oct 02 20:48:22 functional-012915 kubelet[1773]: E1002 20:48:22.870968    1773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 6 (286.799746ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:48:24.741955  103326 status.go:458] kubeconfig endpoint: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (498.28s)
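
Note on the recurring failure above: every CreateContainerError in this run carries the same underlying error, "container create failed: cannot open sd-bus: No such file or directory". With cri-o configured to use "systemd" as its cgroup manager (see the cgroup_manager = "systemd" sed step in the SoftStart log below), the OCI runtime talks to systemd over D-Bus to create container scopes, and container creation fails when the systemd/D-Bus sockets are not reachable inside the kicbase node container. A minimal diagnostic sketch, assuming shell access to the node (e.g. minikube ssh -p functional-012915) and a Go toolchain; the socket paths are the conventional systemd locations, not values taken from this report:

    // sdbus_check.go: report whether the sockets a systemd cgroup manager
    // dials are present. The paths are standard systemd defaults (an
    // assumption), so a MISSING result only suggests, not proves, the
    // "cannot open sd-bus" failure mode seen in the kubelet log above.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/run/systemd/private",        // systemd manager's private bus
            "/run/dbus/system_bus_socket", // system D-Bus socket
        }
        for _, p := range paths {
            if fi, err := os.Stat(p); err != nil {
                fmt.Printf("%-30s MISSING (%v)\n", p, err)
            } else {
                fmt.Printf("%-30s present (mode %v)\n", p, fi.Mode())
            }
        }
    }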

TestFunctional/serial/SoftStart (366.38s)

=== RUN   TestFunctional/serial/SoftStart
I1002 20:48:24.757336   84100 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --alsologtostderr -v=8
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012915 --alsologtostderr -v=8: exit status 80 (6m3.833864291s)

-- stdout --
	* [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1002 20:48:24.799042  103439 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:24.799301  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799310  103439 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:24.799319  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799517  103439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:48:24.799997  103439 out.go:368] Setting JSON to false
	I1002 20:48:24.800864  103439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9046,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:48:24.800953  103439 start.go:140] virtualization: kvm guest
	I1002 20:48:24.803402  103439 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:48:24.804691  103439 notify.go:220] Checking for updates...
	I1002 20:48:24.804714  103439 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:48:24.806239  103439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:48:24.807535  103439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:24.808966  103439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:48:24.810229  103439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:48:24.811490  103439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:48:24.813239  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:24.813364  103439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:48:24.837336  103439 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:48:24.837438  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.897484  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.886469072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.897616  103439 docker.go:318] overlay module found
	I1002 20:48:24.900384  103439 out.go:179] * Using the docker driver based on existing profile
	I1002 20:48:24.901640  103439 start.go:304] selected driver: docker
	I1002 20:48:24.901656  103439 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.901817  103439 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:48:24.901921  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.957281  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.94713494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.957915  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:24.957982  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:24.958030  103439 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.959902  103439 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:48:24.961424  103439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:48:24.962912  103439 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:48:24.964111  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:24.964148  103439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:48:24.964157  103439 cache.go:58] Caching tarball of preloaded images
	I1002 20:48:24.964205  103439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:48:24.964264  103439 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:48:24.964275  103439 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:48:24.964363  103439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:48:24.984848  103439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:48:24.984867  103439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:48:24.984883  103439 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:48:24.984905  103439 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:48:24.984974  103439 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-012915"
	I1002 20:48:24.984991  103439 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:48:24.984998  103439 fix.go:54] fixHost starting: 
	I1002 20:48:24.985199  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:25.001871  103439 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:48:25.001898  103439 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:48:25.003929  103439 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:48:25.003964  103439 machine.go:93] provisionDockerMachine start ...
	I1002 20:48:25.004037  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.020996  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.021230  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.021243  103439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:48:25.163676  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.163710  103439 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:48:25.163781  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.181773  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.181995  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.182012  103439 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:48:25.333959  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.334023  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.352331  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.352586  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.352605  103439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:48:25.495627  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:48:25.495660  103439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:48:25.495680  103439 ubuntu.go:190] setting up certificates
	I1002 20:48:25.495691  103439 provision.go:84] configureAuth start
	I1002 20:48:25.495761  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:25.513229  103439 provision.go:143] copyHostCerts
	I1002 20:48:25.513269  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513297  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:48:25.513309  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513378  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:48:25.513471  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513489  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:48:25.513496  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513524  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:48:25.513585  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513606  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:48:25.513612  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513642  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:48:25.513706  103439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:48:25.699700  103439 provision.go:177] copyRemoteCerts
	I1002 20:48:25.699774  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:48:25.699818  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.717132  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:25.819529  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:48:25.819590  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:48:25.836961  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:48:25.837026  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:48:25.853991  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:48:25.854053  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:48:25.872348  103439 provision.go:87] duration metric: took 376.642239ms to configureAuth
	I1002 20:48:25.872378  103439 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:48:25.872536  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.872653  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.891454  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.891685  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.891706  103439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:48:26.156804  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:48:26.156829  103439 machine.go:96] duration metric: took 1.152858016s to provisionDockerMachine
	I1002 20:48:26.156858  103439 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:48:26.156868  103439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:48:26.156920  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:48:26.156969  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.176188  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.278892  103439 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:48:26.282350  103439 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:48:26.282380  103439 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:48:26.282385  103439 command_runner.go:130] > VERSION_ID="12"
	I1002 20:48:26.282389  103439 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:48:26.282393  103439 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:48:26.282397  103439 command_runner.go:130] > ID=debian
	I1002 20:48:26.282401  103439 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:48:26.282406  103439 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:48:26.282410  103439 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:48:26.282454  103439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:48:26.282471  103439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:48:26.282480  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:48:26.282532  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:48:26.282613  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:48:26.282622  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 20:48:26.282689  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:48:26.282696  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> /etc/test/nested/copy/84100/hosts
	I1002 20:48:26.282728  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:48:26.291027  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:26.308674  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:48:26.325806  103439 start.go:296] duration metric: took 168.930408ms for postStartSetup
	I1002 20:48:26.325916  103439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:48:26.325957  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.343664  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.443702  103439 command_runner.go:130] > 54%
	I1002 20:48:26.443812  103439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:48:26.449039  103439 command_runner.go:130] > 135G
	I1002 20:48:26.449077  103439 fix.go:56] duration metric: took 1.464076482s for fixHost
	I1002 20:48:26.449092  103439 start.go:83] releasing machines lock for "functional-012915", held for 1.464107586s
	I1002 20:48:26.449173  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:26.467196  103439 ssh_runner.go:195] Run: cat /version.json
	I1002 20:48:26.467258  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.467342  103439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:48:26.467420  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.485438  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.485701  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.633417  103439 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:48:26.635353  103439 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:48:26.635549  103439 ssh_runner.go:195] Run: systemctl --version
	I1002 20:48:26.642439  103439 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:48:26.642484  103439 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:48:26.642544  103439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:48:26.678549  103439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:48:26.683206  103439 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:48:26.683277  103439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:48:26.683333  103439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:48:26.691349  103439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:48:26.691374  103439 start.go:495] detecting cgroup driver to use...
	I1002 20:48:26.691404  103439 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:48:26.691448  103439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:48:26.705612  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:48:26.718317  103439 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:48:26.718372  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:48:26.732790  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:48:26.745127  103439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:48:26.830208  103439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:48:26.916089  103439 docker.go:234] disabling docker service ...
	I1002 20:48:26.916158  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:48:26.931041  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:48:26.944314  103439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:48:27.029050  103439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:48:27.113127  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:48:27.125650  103439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:48:27.138813  103439 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:48:27.139624  103439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:48:27.139683  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.148622  103439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:48:27.148678  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.157772  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.166537  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.175276  103439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:48:27.183311  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.192091  103439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.200250  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.208827  103439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:48:27.216057  103439 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:48:27.216134  103439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:48:27.223341  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.309631  103439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:48:27.427286  103439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:48:27.427366  103439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:48:27.431839  103439 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:48:27.431866  103439 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:48:27.431885  103439 command_runner.go:130] > Device: 0,59	Inode: 3822        Links: 1
	I1002 20:48:27.431892  103439 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:27.431897  103439 command_runner.go:130] > Access: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431903  103439 command_runner.go:130] > Modify: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431907  103439 command_runner.go:130] > Change: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431912  103439 command_runner.go:130] >  Birth: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431962  103439 start.go:563] Will wait 60s for crictl version
	I1002 20:48:27.432014  103439 ssh_runner.go:195] Run: which crictl
	I1002 20:48:27.435939  103439 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:48:27.436036  103439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:48:27.458416  103439 command_runner.go:130] > Version:  0.1.0
	I1002 20:48:27.458438  103439 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:48:27.458443  103439 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:48:27.458448  103439 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:48:27.460155  103439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:48:27.460222  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.486159  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.486183  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.486190  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.486198  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.486205  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.486212  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.486219  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.486226  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.486237  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.486246  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.486251  103439 command_runner.go:130] >      static
	I1002 20:48:27.486259  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.486263  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.486266  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.486272  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.486276  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.486300  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.486312  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.486330  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.486339  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.487532  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.514593  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.514624  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.514630  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.514634  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.514639  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.514643  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.514647  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.514654  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.514658  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.514662  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.514665  103439 command_runner.go:130] >      static
	I1002 20:48:27.514668  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.514677  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.514685  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.514688  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.514691  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.514695  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.514699  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.514706  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.514709  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.516768  103439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:48:27.518063  103439 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:48:27.535001  103439 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:48:27.539645  103439 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:48:27.539759  103439 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:48:27.539875  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:27.539928  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.571471  103439 command_runner.go:130] > {
	I1002 20:48:27.571489  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.571493  103439 command_runner.go:130] >     {
	I1002 20:48:27.571502  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.571507  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571513  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.571516  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571520  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571528  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.571535  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.571539  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571543  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.571547  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571554  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571560  103439 command_runner.go:130] >     },
	I1002 20:48:27.571568  103439 command_runner.go:130] >     {
	I1002 20:48:27.571574  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.571577  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571583  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.571588  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571592  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571600  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.571610  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.571616  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571620  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.571626  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571633  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571644  103439 command_runner.go:130] >     },
	I1002 20:48:27.571650  103439 command_runner.go:130] >     {
	I1002 20:48:27.571656  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.571662  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571667  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.571672  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571676  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571685  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.571694  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.571700  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571704  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.571710  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.571714  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571719  103439 command_runner.go:130] >     },
	I1002 20:48:27.571721  103439 command_runner.go:130] >     {
	I1002 20:48:27.571727  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.571733  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571752  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.571758  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571767  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571778  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.571787  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.571792  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571796  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.571802  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571805  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571810  103439 command_runner.go:130] >       },
	I1002 20:48:27.571824  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571831  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571834  103439 command_runner.go:130] >     },
	I1002 20:48:27.571838  103439 command_runner.go:130] >     {
	I1002 20:48:27.571844  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.571850  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571859  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.571866  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571870  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571879  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.571888  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.571894  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571898  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.571903  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571907  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571913  103439 command_runner.go:130] >       },
	I1002 20:48:27.571916  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571922  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571925  103439 command_runner.go:130] >     },
	I1002 20:48:27.571931  103439 command_runner.go:130] >     {
	I1002 20:48:27.571937  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.571943  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571948  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.571953  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571957  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571967  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.571976  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.571981  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571985  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.571991  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571994  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572000  103439 command_runner.go:130] >       },
	I1002 20:48:27.572003  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572009  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572012  103439 command_runner.go:130] >     },
	I1002 20:48:27.572015  103439 command_runner.go:130] >     {
	I1002 20:48:27.572023  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.572027  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572038  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.572048  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572054  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572061  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.572070  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.572076  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572080  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.572085  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572089  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572095  103439 command_runner.go:130] >     },
	I1002 20:48:27.572098  103439 command_runner.go:130] >     {
	I1002 20:48:27.572106  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.572109  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572114  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.572119  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572123  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572132  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.572157  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.572163  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572167  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.572172  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572175  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572180  103439 command_runner.go:130] >       },
	I1002 20:48:27.572184  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572189  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572192  103439 command_runner.go:130] >     },
	I1002 20:48:27.572197  103439 command_runner.go:130] >     {
	I1002 20:48:27.572203  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.572206  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572213  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.572217  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572222  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572229  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.572237  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.572248  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572254  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.572258  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572263  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.572267  103439 command_runner.go:130] >       },
	I1002 20:48:27.572273  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572282  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.572288  103439 command_runner.go:130] >     }
	I1002 20:48:27.572291  103439 command_runner.go:130] >   ]
	I1002 20:48:27.572295  103439 command_runner.go:130] > }
	I1002 20:48:27.573606  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.573628  103439 crio.go:433] Images already preloaded, skipping extraction
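The preload decision logged by crio.go:514 comes down to decoding the crictl JSON above and checking the reported tags against the expected image set. A minimal sketch of that decode step, using hypothetical type and function names rather than minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the subset of `crictl images --output json`
// fields that appear in the dump above.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// tagsOf collects every repoTag crictl reported, so a caller can
// verify each expected image (e.g. registry.k8s.io/pause:3.10.1)
// is already present before deciding to skip extraction.
func tagsOf(raw []byte) (map[string]bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return nil, err
	}
	tags := make(map[string]bool)
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			tags[t] = true
		}
	}
	return tags, nil
}

func main() {
	raw := []byte(`{"images": [{"id": "cd07", "repoTags": ["registry.k8s.io/pause:3.10.1"], "pinned": true}]}`)
	tags, err := tagsOf(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println(tags["registry.k8s.io/pause:3.10.1"]) // true
}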
	I1002 20:48:27.573687  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.599395  103439 command_runner.go:130] > {
	I1002 20:48:27.599418  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.599424  103439 command_runner.go:130] >     {
	I1002 20:48:27.599434  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.599439  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599447  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.599452  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599460  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599473  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.599500  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.599510  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599518  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.599526  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599540  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599549  103439 command_runner.go:130] >     },
	I1002 20:48:27.599555  103439 command_runner.go:130] >     {
	I1002 20:48:27.599575  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.599582  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599590  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.599596  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599604  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599624  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.599640  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.599648  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599656  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.599664  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599676  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599684  103439 command_runner.go:130] >     },
	I1002 20:48:27.599690  103439 command_runner.go:130] >     {
	I1002 20:48:27.599703  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.599713  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599722  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.599730  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599754  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599770  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.599783  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.599791  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599798  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.599808  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.599815  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599823  103439 command_runner.go:130] >     },
	I1002 20:48:27.599829  103439 command_runner.go:130] >     {
	I1002 20:48:27.599840  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.599849  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599858  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.599865  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599873  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599887  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.599901  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.599918  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599927  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.599934  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.599942  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.599948  103439 command_runner.go:130] >       },
	I1002 20:48:27.599974  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599984  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599989  103439 command_runner.go:130] >     },
	I1002 20:48:27.599994  103439 command_runner.go:130] >     {
	I1002 20:48:27.600004  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.600013  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600021  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.600029  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600036  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600050  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.600065  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.600073  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600080  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.600089  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600103  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600112  103439 command_runner.go:130] >       },
	I1002 20:48:27.600119  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600128  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600134  103439 command_runner.go:130] >     },
	I1002 20:48:27.600142  103439 command_runner.go:130] >     {
	I1002 20:48:27.600152  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.600161  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600171  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.600179  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600185  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600199  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.600213  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.600220  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600233  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.600242  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600250  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600258  103439 command_runner.go:130] >       },
	I1002 20:48:27.600264  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600273  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600278  103439 command_runner.go:130] >     },
	I1002 20:48:27.600284  103439 command_runner.go:130] >     {
	I1002 20:48:27.600297  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.600306  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600315  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.600332  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600339  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600354  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.600368  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.600376  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600383  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.600393  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600401  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600410  103439 command_runner.go:130] >     },
	I1002 20:48:27.600415  103439 command_runner.go:130] >     {
	I1002 20:48:27.600423  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.600428  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600437  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.600446  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600452  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600464  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.600497  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.600505  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600513  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.600520  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600527  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600536  103439 command_runner.go:130] >       },
	I1002 20:48:27.600554  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600563  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600569  103439 command_runner.go:130] >     },
	I1002 20:48:27.600574  103439 command_runner.go:130] >     {
	I1002 20:48:27.600585  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.600594  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600603  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.600611  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600618  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600631  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.600643  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.600652  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600659  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.600668  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600676  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.600684  103439 command_runner.go:130] >       },
	I1002 20:48:27.600692  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600701  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.600708  103439 command_runner.go:130] >     }
	I1002 20:48:27.600716  103439 command_runner.go:130] >   ]
	I1002 20:48:27.600721  103439 command_runner.go:130] > }
	I1002 20:48:27.600844  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.600859  103439 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:48:27.600868  103439 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:48:27.600982  103439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
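The drop-in logged by kubeadm.go:946 substitutes the node's hostname and IP into a fixed ExecStart line. A sketch of that substitution with text/template, using hypothetical names (minikube's real implementation differs):

package main

import (
	"log"
	"os"
	"text/template"
)

// kubeletOpts holds the per-node values interpolated into the unit above.
type kubeletOpts struct {
	BinDir   string // e.g. /var/lib/minikube/binaries/v1.34.1
	Hostname string // e.g. functional-012915
	NodeIP   string // e.g. 192.168.49.2
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Render to stdout; a real flow would write a systemd drop-in file
	// and then daemon-reload/restart the kubelet.
	err := t.Execute(os.Stdout, kubeletOpts{
		BinDir:   "/var/lib/minikube/binaries/v1.34.1",
		Hostname: "functional-012915",
		NodeIP:   "192.168.49.2",
	})
	if err != nil {
		log.Fatal(err)
	}
}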
	I1002 20:48:27.601057  103439 ssh_runner.go:195] Run: crio config
	I1002 20:48:27.642390  103439 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:48:27.642423  103439 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:48:27.642435  103439 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:48:27.642439  103439 command_runner.go:130] > #
	I1002 20:48:27.642450  103439 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:48:27.642460  103439 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:48:27.642470  103439 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:48:27.642501  103439 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:48:27.642510  103439 command_runner.go:130] > # reload'.
	I1002 20:48:27.642520  103439 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:48:27.642532  103439 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:48:27.642543  103439 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:48:27.642558  103439 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:48:27.642563  103439 command_runner.go:130] > [crio]
	I1002 20:48:27.642572  103439 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:48:27.642580  103439 command_runner.go:130] > # containers images, in this directory.
	I1002 20:48:27.642602  103439 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:48:27.642618  103439 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:48:27.642627  103439 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:48:27.642637  103439 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I1002 20:48:27.642643  103439 command_runner.go:130] > # imagestore = ""
	I1002 20:48:27.642656  103439 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:48:27.642670  103439 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:48:27.642681  103439 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:48:27.642691  103439 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:48:27.642708  103439 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:48:27.642715  103439 command_runner.go:130] > # storage_option = [
	I1002 20:48:27.642723  103439 command_runner.go:130] > # ]
	I1002 20:48:27.642733  103439 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:48:27.642762  103439 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:48:27.642770  103439 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:48:27.642783  103439 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:48:27.642796  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:48:27.642804  103439 command_runner.go:130] > # always happen on a node reboot
	I1002 20:48:27.642814  103439 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:48:27.642844  103439 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:48:27.642859  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:48:27.642869  103439 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:48:27.642883  103439 command_runner.go:130] > # version_file_persist = ""
	I1002 20:48:27.642895  103439 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:48:27.642919  103439 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:48:27.642930  103439 command_runner.go:130] > # internal_wipe = true
	I1002 20:48:27.642942  103439 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:48:27.642957  103439 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:48:27.642963  103439 command_runner.go:130] > # internal_repair = true
	I1002 20:48:27.642972  103439 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:48:27.642981  103439 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:48:27.642990  103439 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:48:27.642998  103439 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:48:27.643012  103439 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:48:27.643018  103439 command_runner.go:130] > [crio.api]
	I1002 20:48:27.643028  103439 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:48:27.643038  103439 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:48:27.643047  103439 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:48:27.643058  103439 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:48:27.643068  103439 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:48:27.643081  103439 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:48:27.643088  103439 command_runner.go:130] > # stream_port = "0"
	I1002 20:48:27.643100  103439 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:48:27.643107  103439 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:48:27.643117  103439 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:48:27.643126  103439 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:48:27.643137  103439 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:48:27.643149  103439 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643154  103439 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:48:27.643169  103439 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:48:27.643178  103439 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643188  103439 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:48:27.643205  103439 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:48:27.643218  103439 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:48:27.643228  103439 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:48:27.643241  103439 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:48:27.643279  103439 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643300  103439 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:48:27.643322  103439 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643333  103439 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:48:27.643343  103439 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:48:27.643352  103439 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:48:27.643370  103439 command_runner.go:130] > [crio.runtime]
	I1002 20:48:27.643381  103439 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:48:27.643393  103439 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:48:27.643403  103439 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:48:27.643414  103439 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:48:27.643423  103439 command_runner.go:130] > # default_ulimits = [
	I1002 20:48:27.643428  103439 command_runner.go:130] > # ]
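The ulimit entries above use the `<ulimit name>=<soft limit>:<hard limit>` form, e.g. "nofile=1024:2048". A throwaway sketch of parsing one such entry (a hypothetical helper, not CRI-O's actual parser):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseUlimit splits "nofile=1024:2048" into its name, soft and hard parts.
func parseUlimit(s string) (name string, soft, hard uint64, err error) {
	name, limits, ok := strings.Cut(s, "=")
	if !ok {
		return "", 0, 0, fmt.Errorf("missing '=' in %q", s)
	}
	softStr, hardStr, ok := strings.Cut(limits, ":")
	if !ok {
		return "", 0, 0, fmt.Errorf("missing ':' in %q", s)
	}
	if soft, err = strconv.ParseUint(softStr, 10, 64); err != nil {
		return "", 0, 0, err
	}
	if hard, err = strconv.ParseUint(hardStr, 10, 64); err != nil {
		return "", 0, 0, err
	}
	return name, soft, hard, nil
}

func main() {
	fmt.Println(parseUlimit("nofile=1024:2048")) // nofile 1024 2048 <nil>
}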
	I1002 20:48:27.643441  103439 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:48:27.643450  103439 command_runner.go:130] > # no_pivot = false
	I1002 20:48:27.643460  103439 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:48:27.643473  103439 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:48:27.643482  103439 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:48:27.643494  103439 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:48:27.643511  103439 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:48:27.643524  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643532  103439 command_runner.go:130] > # conmon = ""
	I1002 20:48:27.643539  103439 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:48:27.643549  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:48:27.643556  103439 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:48:27.643565  103439 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:48:27.643572  103439 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:48:27.643582  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643588  103439 command_runner.go:130] > # conmon_env = [
	I1002 20:48:27.643592  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643600  103439 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:48:27.643612  103439 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:48:27.643622  103439 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:48:27.643631  103439 command_runner.go:130] > # default_env = [
	I1002 20:48:27.643647  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643661  103439 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:48:27.643672  103439 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:48:27.643679  103439 command_runner.go:130] > # selinux = false
	I1002 20:48:27.643689  103439 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:48:27.643701  103439 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:48:27.643710  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643717  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.643729  103439 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:48:27.643755  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643766  103439 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:48:27.643777  103439 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:48:27.643790  103439 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:48:27.643804  103439 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:48:27.643815  103439 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:48:27.643826  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643834  103439 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:48:27.643847  103439 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:48:27.643856  103439 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:48:27.643863  103439 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:48:27.643875  103439 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:48:27.643886  103439 command_runner.go:130] > # blockio parameters.
	I1002 20:48:27.643892  103439 command_runner.go:130] > # blockio_reload = false
	I1002 20:48:27.643901  103439 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:48:27.643907  103439 command_runner.go:130] > # irqbalance daemon.
	I1002 20:48:27.643914  103439 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:48:27.643922  103439 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 20:48:27.643930  103439 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:48:27.643939  103439 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:48:27.643946  103439 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:48:27.643955  103439 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:48:27.643967  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643976  103439 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:48:27.643991  103439 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:48:27.643998  103439 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:48:27.644004  103439 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:48:27.644010  103439 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:48:27.644016  103439 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:48:27.644022  103439 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:48:27.644026  103439 command_runner.go:130] > # will be added.
	I1002 20:48:27.644030  103439 command_runner.go:130] > # default_capabilities = [
	I1002 20:48:27.644036  103439 command_runner.go:130] > # 	"CHOWN",
	I1002 20:48:27.644039  103439 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:48:27.644042  103439 command_runner.go:130] > # 	"FSETID",
	I1002 20:48:27.644046  103439 command_runner.go:130] > # 	"FOWNER",
	I1002 20:48:27.644049  103439 command_runner.go:130] > # 	"SETGID",
	I1002 20:48:27.644077  103439 command_runner.go:130] > # 	"SETUID",
	I1002 20:48:27.644089  103439 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:48:27.644096  103439 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:48:27.644099  103439 command_runner.go:130] > # 	"KILL",
	I1002 20:48:27.644102  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644111  103439 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:48:27.644117  103439 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:48:27.644124  103439 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:48:27.644129  103439 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:48:27.644137  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644140  103439 command_runner.go:130] > default_sysctls = [
	I1002 20:48:27.644146  103439 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:48:27.644149  103439 command_runner.go:130] > ]
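The single sysctl set by default here, net.ipv4.ip_unprivileged_port_start=0, is what lets a non-root container process bind ports below 1024. A quick probe of that effect from inside a pod, as a sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// With net.ipv4.ip_unprivileged_port_start=0 applied to the pod's
	// network namespace, a non-root process may bind port 80; without
	// it, this fails with EACCES for any UID other than 0.
	ln, err := net.Listen("tcp", ":80")
	if err != nil {
		fmt.Println("bind failed:", err)
		return
	}
	defer ln.Close()
	fmt.Println("bound", ln.Addr())
}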
	I1002 20:48:27.644153  103439 command_runner.go:130] > # List of devices on the host that a
	I1002 20:48:27.644159  103439 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:48:27.644165  103439 command_runner.go:130] > # allowed_devices = [
	I1002 20:48:27.644168  103439 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:48:27.644172  103439 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:48:27.644177  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644181  103439 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:48:27.644194  103439 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:48:27.644201  103439 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:48:27.644207  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644210  103439 command_runner.go:130] > # additional_devices = [
	I1002 20:48:27.644213  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644218  103439 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:48:27.644224  103439 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:48:27.644227  103439 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:48:27.644231  103439 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:48:27.644235  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644241  103439 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:48:27.644249  103439 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:48:27.644253  103439 command_runner.go:130] > # Defaults to false.
	I1002 20:48:27.644259  103439 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:48:27.644265  103439 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:48:27.644272  103439 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:48:27.644275  103439 command_runner.go:130] > # hooks_dir = [
	I1002 20:48:27.644280  103439 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:48:27.644283  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644289  103439 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:48:27.644297  103439 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:48:27.644302  103439 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:48:27.644305  103439 command_runner.go:130] > #
	I1002 20:48:27.644310  103439 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:48:27.644323  103439 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:48:27.644329  103439 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:48:27.644334  103439 command_runner.go:130] > #
	I1002 20:48:27.644340  103439 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:48:27.644346  103439 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:48:27.644352  103439 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:48:27.644356  103439 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:48:27.644359  103439 command_runner.go:130] > #
	I1002 20:48:27.644363  103439 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:48:27.644377  103439 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:48:27.644385  103439 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:48:27.644389  103439 command_runner.go:130] > # pids_limit = -1
	I1002 20:48:27.644397  103439 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:48:27.644403  103439 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:48:27.644409  103439 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:48:27.644418  103439 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:48:27.644422  103439 command_runner.go:130] > # log_size_max = -1
	I1002 20:48:27.644430  103439 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:48:27.644434  103439 command_runner.go:130] > # log_to_journald = false
	I1002 20:48:27.644439  103439 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:48:27.644444  103439 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:48:27.644450  103439 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:48:27.644454  103439 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:48:27.644461  103439 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:48:27.644465  103439 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:48:27.644470  103439 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:48:27.644473  103439 command_runner.go:130] > # read_only = false
	I1002 20:48:27.644482  103439 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:48:27.644490  103439 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:48:27.644494  103439 command_runner.go:130] > # live configuration reload.
	I1002 20:48:27.644500  103439 command_runner.go:130] > # log_level = "info"
	I1002 20:48:27.644505  103439 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:48:27.644509  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.644512  103439 command_runner.go:130] > # log_filter = ""
	I1002 20:48:27.644518  103439 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644525  103439 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:48:27.644529  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644536  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644542  103439 command_runner.go:130] > # uid_mappings = ""
	I1002 20:48:27.644547  103439 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644552  103439 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:48:27.644559  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644573  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644579  103439 command_runner.go:130] > # gid_mappings = ""
	I1002 20:48:27.644585  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:48:27.644591  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644598  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644606  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644611  103439 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:48:27.644617  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:48:27.644625  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644631  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644640  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644644  103439 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:48:27.644652  103439 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:48:27.644657  103439 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:48:27.644665  103439 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:48:27.644668  103439 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:48:27.644673  103439 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:48:27.644679  103439 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:48:27.644686  103439 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:48:27.644690  103439 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:48:27.644693  103439 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:48:27.644699  103439 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:48:27.644706  103439 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:48:27.644712  103439 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:48:27.644718  103439 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:48:27.644726  103439 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:48:27.644733  103439 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:48:27.644752  103439 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:48:27.644764  103439 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:48:27.644769  103439 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:48:27.644777  103439 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:48:27.644782  103439 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:48:27.644785  103439 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:48:27.644798  103439 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:48:27.644804  103439 command_runner.go:130] > # pinns_path = ""
	I1002 20:48:27.644810  103439 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:48:27.644817  103439 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:48:27.644821  103439 command_runner.go:130] > # enable_criu_support = true
	I1002 20:48:27.644826  103439 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:48:27.644831  103439 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:48:27.644837  103439 command_runner.go:130] > # enable_pod_events = false
	I1002 20:48:27.644842  103439 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:48:27.644849  103439 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:48:27.644853  103439 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:48:27.644858  103439 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:48:27.644867  103439 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 20:48:27.644876  103439 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:48:27.644882  103439 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:48:27.644890  103439 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:48:27.644896  103439 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:48:27.644900  103439 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:48:27.644905  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644911  103439 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:48:27.644919  103439 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:48:27.644925  103439 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:48:27.644930  103439 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:48:27.644932  103439 command_runner.go:130] > #
	I1002 20:48:27.644937  103439 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:48:27.644943  103439 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:48:27.644947  103439 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:48:27.644951  103439 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:48:27.644955  103439 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:48:27.644959  103439 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:48:27.644963  103439 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:48:27.644968  103439 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:48:27.644972  103439 command_runner.go:130] > # monitor_env = []
	I1002 20:48:27.644980  103439 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:48:27.644987  103439 command_runner.go:130] > # allowed_annotations = []
	I1002 20:48:27.644992  103439 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:48:27.644998  103439 command_runner.go:130] > # no_sync_log = false
	I1002 20:48:27.645001  103439 command_runner.go:130] > # default_annotations = {}
	I1002 20:48:27.645007  103439 command_runner.go:130] > # stream_websockets = false
	I1002 20:48:27.645011  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.645086  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645099  103439 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:48:27.645104  103439 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:48:27.645110  103439 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:48:27.645115  103439 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:48:27.645119  103439 command_runner.go:130] > #   in $PATH.
	I1002 20:48:27.645124  103439 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:48:27.645131  103439 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:48:27.645137  103439 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:48:27.645142  103439 command_runner.go:130] > #   state.
	I1002 20:48:27.645148  103439 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:48:27.645156  103439 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:48:27.645161  103439 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:48:27.645173  103439 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:48:27.645180  103439 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:48:27.645186  103439 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:48:27.645191  103439 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:48:27.645197  103439 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:48:27.645205  103439 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:48:27.645216  103439 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:48:27.645224  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:48:27.645231  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:48:27.645239  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:48:27.645245  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:48:27.645254  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:48:27.645259  103439 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:48:27.645276  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:48:27.645284  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:48:27.645296  103439 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:48:27.645301  103439 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:48:27.645309  103439 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:48:27.645320  103439 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:48:27.645327  103439 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:48:27.645333  103439 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:48:27.645341  103439 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:48:27.645348  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:48:27.645355  103439 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:48:27.645360  103439 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:48:27.645368  103439 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:48:27.645373  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:48:27.645381  103439 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:48:27.645385  103439 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:48:27.645392  103439 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:48:27.645398  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:48:27.645405  103439 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 20:48:27.645410  103439 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:48:27.645417  103439 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:48:27.645426  103439 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:48:27.645433  103439 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:48:27.645441  103439 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:48:27.645446  103439 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:48:27.645454  103439 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:48:27.645461  103439 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:48:27.645468  103439 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:48:27.645475  103439 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:48:27.645484  103439 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:48:27.645490  103439 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:48:27.645496  103439 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:48:27.645505  103439 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:48:27.645517  103439 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:48:27.645523  103439 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:48:27.645529  103439 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:48:27.645542  103439 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:48:27.645548  103439 command_runner.go:130] > #
	I1002 20:48:27.645552  103439 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:48:27.645555  103439 command_runner.go:130] > #
	I1002 20:48:27.645560  103439 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:48:27.645569  103439 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:48:27.645573  103439 command_runner.go:130] > #
	I1002 20:48:27.645578  103439 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:48:27.645586  103439 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:48:27.645589  103439 command_runner.go:130] > #
	I1002 20:48:27.645595  103439 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:48:27.645598  103439 command_runner.go:130] > # feature.
	I1002 20:48:27.645601  103439 command_runner.go:130] > #
	I1002 20:48:27.645606  103439 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1002 20:48:27.645615  103439 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:48:27.645622  103439 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:48:27.645627  103439 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:48:27.645635  103439 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:48:27.645637  103439 command_runner.go:130] > #
	I1002 20:48:27.645643  103439 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:48:27.645651  103439 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:48:27.645653  103439 command_runner.go:130] > #
	I1002 20:48:27.645662  103439 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:48:27.645672  103439 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:48:27.645676  103439 command_runner.go:130] > #
	I1002 20:48:27.645682  103439 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:48:27.645690  103439 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:48:27.645693  103439 command_runner.go:130] > # limitation.
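To make the prerequisites concrete, a minimal hypothetical sketch follows: a runtime-handler drop-in that allows the notifier annotation, plus a Pod that opts in. The file path, pod name, and container image are assumptions, not values from this run.

	# /etc/crio/crio.conf.d/99-notifier.conf (hypothetical drop-in)
	[crio.runtime.runtimes.runc]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

	# Pod opting in; restartPolicy must be Never, as explained above.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: notifier-demo
	  annotations:
	    io.kubernetes.cri-o.seccompNotifierAction: "stop"
	spec:
	  restartPolicy: Never
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1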
	I1002 20:48:27.645697  103439 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:48:27.645701  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:48:27.645709  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645715  103439 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:48:27.645725  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645731  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645746  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645754  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645762  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645768  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645777  103439 command_runner.go:130] > allowed_annotations = [
	I1002 20:48:27.645783  103439 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:48:27.645788  103439 command_runner.go:130] > ]
	I1002 20:48:27.645792  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645796  103439 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:48:27.645803  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:48:27.645807  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645811  103439 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:48:27.645815  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645818  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645822  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645826  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645830  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645834  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645838  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645844  103439 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:48:27.645852  103439 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:48:27.645857  103439 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:48:27.645866  103439 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:48:27.645875  103439 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:48:27.645886  103439 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:48:27.645894  103439 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:48:27.645899  103439 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:48:27.645907  103439 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:48:27.645917  103439 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified,
	I1002 20:48:27.645930  103439 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1002 20:48:27.645940  103439 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:48:27.645943  103439 command_runner.go:130] > # Example:
	I1002 20:48:27.645949  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:48:27.645953  103439 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:48:27.645960  103439 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:48:27.645966  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:48:27.645972  103439 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:48:27.645975  103439 command_runner.go:130] > # cpushares = "5"
	I1002 20:48:27.645979  103439 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:48:27.645982  103439 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:48:27.645986  103439 command_runner.go:130] > # cpulimit = "35"
	I1002 20:48:27.645989  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645993  103439 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:48:27.646000  103439 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:48:27.646006  103439 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:48:27.646011  103439 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:48:27.646021  103439 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:48:27.646026  103439 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
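A hedged sketch of a Pod opting into the example workload above; the pod and container names are assumptions, and the override annotation follows the form shown in the preceding comment:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                              # activation: key only, value ignored
	    io.crio.workload-type/app: '{"cpushares": "10"}'  # per-container override for "app"
	spec:
	  containers:
	  - name: app
	    image: registry.k8s.io/pause:3.10.1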
	I1002 20:48:27.646034  103439 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:48:27.646044  103439 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:48:27.646052  103439 command_runner.go:130] > # Default value is set to true
	I1002 20:48:27.646058  103439 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:48:27.646068  103439 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:48:27.646074  103439 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:48:27.646083  103439 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:48:27.646092  103439 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:48:27.646104  103439 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:48:27.646118  103439 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:48:27.646127  103439 command_runner.go:130] > # timezone = ""
	I1002 20:48:27.646136  103439 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:48:27.646144  103439 command_runner.go:130] > #
	I1002 20:48:27.646158  103439 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:48:27.646179  103439 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:48:27.646188  103439 command_runner.go:130] > [crio.image]
	I1002 20:48:27.646201  103439 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:48:27.646209  103439 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:48:27.646217  103439 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:48:27.646225  103439 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646229  103439 command_runner.go:130] > # global_auth_file = ""
	I1002 20:48:27.646236  103439 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:48:27.646241  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646248  103439 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.646254  103439 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:48:27.646260  103439 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646265  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646271  103439 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:48:27.646276  103439 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:48:27.646281  103439 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:48:27.646289  103439 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:48:27.646295  103439 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:48:27.646301  103439 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:48:27.646306  103439 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:48:27.646316  103439 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:48:27.646323  103439 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:48:27.646329  103439 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:48:27.646336  103439 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:48:27.646342  103439 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:48:27.646345  103439 command_runner.go:130] > # pinned_images = [
	I1002 20:48:27.646348  103439 command_runner.go:130] > # ]
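For illustration, the three pattern styles described above could be combined like this (the image names are assumptions):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
		"quay.io/myorg/*",                # glob: wildcard * only at the end
		"*coredns*",                      # keyword: wildcards on both ends
	]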
	I1002 20:48:27.646354  103439 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:48:27.646362  103439 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:48:27.646368  103439 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:48:27.646376  103439 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:48:27.646381  103439 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:48:27.646386  103439 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
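The referenced /etc/crio/policy.json follows containers-policy.json(5). Its minimal permissive form is a single default rule; this is a sketch, not necessarily the policy used in this run:

	{
		"default": [
			{ "type": "insecureAcceptAnything" }
		]
	}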
	I1002 20:48:27.646399  103439 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:48:27.646411  103439 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:48:27.646423  103439 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:48:27.646436  103439 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or the
	I1002 20:48:27.646447  103439 command_runner.go:130] > # system-wide policy will be used as fallback. Must be an absolute path.
	I1002 20:48:27.646458  103439 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:48:27.646470  103439 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:48:27.646480  103439 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:48:27.646486  103439 command_runner.go:130] > # changing them here.
	I1002 20:48:27.646491  103439 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:48:27.646497  103439 command_runner.go:130] > # insecure_registries = [
	I1002 20:48:27.646500  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646507  103439 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:48:27.646516  103439 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:48:27.646522  103439 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:48:27.646527  103439 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:48:27.646531  103439 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:48:27.646538  103439 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:48:27.646544  103439 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:48:27.646551  103439 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:48:27.646557  103439 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:48:27.646571  103439 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1002 20:48:27.646579  103439 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:48:27.646583  103439 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:48:27.646590  103439 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:48:27.646596  103439 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:48:27.646605  103439 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:48:27.646611  103439 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:48:27.646615  103439 command_runner.go:130] > # short_name_mode = "enforcing"
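The ambiguity mentioned above arises when a short name such as "busybox" could resolve against more than one unqualified-search registry. An alias in registries.conf resolves it deterministically; a sketch per containers-registries.conf(5), with an assumed registry list:

	unqualified-search-registries = ["docker.io", "quay.io"]

	[aliases]
	"busybox" = "docker.io/library/busybox"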
	I1002 20:48:27.646620  103439 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 20:48:27.646628  103439 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:48:27.646632  103439 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:48:27.646638  103439 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:48:27.646649  103439 command_runner.go:130] > # CNI plugins.
	I1002 20:48:27.646655  103439 command_runner.go:130] > [crio.network]
	I1002 20:48:27.646660  103439 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:48:27.646667  103439 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:48:27.646671  103439 command_runner.go:130] > # cni_default_network = ""
	I1002 20:48:27.646678  103439 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:48:27.646682  103439 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:48:27.646690  103439 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:48:27.646693  103439 command_runner.go:130] > # plugin_dirs = [
	I1002 20:48:27.646696  103439 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:48:27.646699  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646703  103439 command_runner.go:130] > # List of included pod metrics.
	I1002 20:48:27.646709  103439 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:48:27.646711  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646716  103439 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:48:27.646722  103439 command_runner.go:130] > [crio.metrics]
	I1002 20:48:27.646726  103439 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:48:27.646732  103439 command_runner.go:130] > # enable_metrics = false
	I1002 20:48:27.646752  103439 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:48:27.646761  103439 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 20:48:27.646767  103439 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:48:27.646775  103439 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:48:27.646783  103439 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:48:27.646787  103439 command_runner.go:130] > # metrics_collectors = [
	I1002 20:48:27.646793  103439 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:48:27.646797  103439 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:48:27.646800  103439 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:48:27.646804  103439 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:48:27.646807  103439 command_runner.go:130] > # 	"operations_total",
	I1002 20:48:27.646811  103439 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:48:27.646815  103439 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:48:27.646818  103439 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:48:27.646822  103439 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:48:27.646831  103439 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:48:27.646835  103439 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:48:27.646839  103439 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:48:27.646842  103439 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:48:27.646846  103439 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:48:27.646850  103439 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:48:27.646853  103439 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:48:27.646857  103439 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:48:27.646860  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646868  103439 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:48:27.646874  103439 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:48:27.646880  103439 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:48:27.646886  103439 command_runner.go:130] > # metrics_port = 9090
	I1002 20:48:27.646891  103439 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:48:27.646901  103439 command_runner.go:130] > # metrics_socket = ""
	I1002 20:48:27.646909  103439 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:48:27.646914  103439 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:48:27.646922  103439 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:48:27.646928  103439 command_runner.go:130] > # certificate on any modification event.
	I1002 20:48:27.646932  103439 command_runner.go:130] > # metrics_cert = ""
	I1002 20:48:27.646939  103439 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:48:27.646943  103439 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:48:27.646949  103439 command_runner.go:130] > # metrics_key = ""
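Putting the metrics settings together: a sketch that enables the endpoint with the default host and port shown above, then scrapes it (the grep pattern assumes the "crio_" collector prefix described earlier):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090

	$ curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'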
	I1002 20:48:27.646954  103439 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:48:27.646960  103439 command_runner.go:130] > [crio.tracing]
	I1002 20:48:27.646966  103439 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:48:27.646971  103439 command_runner.go:130] > # enable_tracing = false
	I1002 20:48:27.646977  103439 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:48:27.646983  103439 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:48:27.646993  103439 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:48:27.646999  103439 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
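A corresponding sketch for exporting every span to a local OTLP/gRPC collector; the endpoint mirrors the default above, and the sampling value follows the always-sample note:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # always sample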
	I1002 20:48:27.647003  103439 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:48:27.647009  103439 command_runner.go:130] > [crio.nri]
	I1002 20:48:27.647017  103439 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:48:27.647023  103439 command_runner.go:130] > # enable_nri = true
	I1002 20:48:27.647032  103439 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:48:27.647038  103439 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:48:27.647042  103439 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:48:27.647049  103439 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:48:27.647053  103439 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:48:27.647060  103439 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:48:27.647065  103439 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:48:27.647584  103439 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:48:27.647654  103439 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:48:27.647663  103439 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:48:27.647672  103439 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:48:27.647686  103439 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:48:27.647693  103439 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:48:27.647707  103439 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:48:27.647731  103439 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:48:27.647757  103439 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:48:27.647770  103439 command_runner.go:130] > # - OCI hook injection
	I1002 20:48:27.647779  103439 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:48:27.647792  103439 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:48:27.647798  103439 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:48:27.647805  103439 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:48:27.647819  103439 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:48:27.647828  103439 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:48:27.647837  103439 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:48:27.647841  103439 command_runner.go:130] > #
	I1002 20:48:27.647853  103439 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:48:27.647859  103439 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:48:27.647866  103439 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:48:27.647883  103439 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:48:27.647891  103439 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:48:27.647898  103439 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:48:27.647906  103439 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:48:27.647916  103439 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:48:27.647921  103439 command_runner.go:130] > # ]
	I1002 20:48:27.647929  103439 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 20:48:27.647939  103439 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:48:27.647949  103439 command_runner.go:130] > [crio.stats]
	I1002 20:48:27.647958  103439 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:48:27.647966  103439 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:48:27.647973  103439 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:48:27.647994  103439 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:48:27.648004  103439 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:48:27.648009  103439 command_runner.go:130] > # collection_period = 0
	I1002 20:48:27.648051  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627189517Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:48:27.648070  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627217069Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:48:27.648087  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627236914Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:48:27.648106  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627255188Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:48:27.648122  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.62731995Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.648141  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627489035Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:48:27.648161  103439 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:48:27.648318  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:27.648331  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:27.648354  103439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:48:27.648401  103439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:48:27.648942  103439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:48:27.649009  103439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:48:27.657181  103439 command_runner.go:130] > kubeadm
	I1002 20:48:27.657198  103439 command_runner.go:130] > kubectl
	I1002 20:48:27.657203  103439 command_runner.go:130] > kubelet
	I1002 20:48:27.657948  103439 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:48:27.658013  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:48:27.665603  103439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:48:27.678534  103439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:48:27.691111  103439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:48:27.703366  103439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:48:27.707046  103439 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
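The grep above verifies the control-plane host entry. The idempotent shell pattern for ensuring such an entry (a sketch, not minikube's exact code) is:

	grep -q 'control-plane.minikube.internal' /etc/hosts || \
	  echo '192.168.49.2 control-plane.minikube.internal' | sudo tee -a /etc/hosts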
	I1002 20:48:27.707133  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.791376  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:27.804011  103439 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:48:27.804040  103439 certs.go:195] generating shared ca certs ...
	I1002 20:48:27.804056  103439 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:27.804180  103439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:48:27.804232  103439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:48:27.804241  103439 certs.go:257] generating profile certs ...
	I1002 20:48:27.804334  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:48:27.804375  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:48:27.804412  103439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:48:27.804424  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:48:27.804435  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:48:27.804453  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:48:27.804469  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:48:27.804481  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:48:27.804494  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:48:27.804506  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:48:27.804518  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:48:27.804560  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:48:27.804591  103439 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:48:27.804601  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:48:27.804623  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:48:27.804645  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:48:27.804666  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:48:27.804704  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:27.804729  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 20:48:27.804763  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:27.804780  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 20:48:27.805294  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:48:27.822974  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:48:27.840455  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:48:27.858368  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:48:27.877146  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:48:27.895282  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:48:27.912487  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:48:27.929452  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:48:27.947144  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:48:27.964177  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:48:27.981785  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:48:27.999006  103439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:48:28.011646  103439 ssh_runner.go:195] Run: openssl version
	I1002 20:48:28.017389  103439 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:48:28.017621  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:48:28.025902  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029403  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029446  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029489  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.063085  103439 command_runner.go:130] > 3ec20f2e
	I1002 20:48:28.063182  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:48:28.071431  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:48:28.080075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083770  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083829  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083901  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.117894  103439 command_runner.go:130] > b5213941
	I1002 20:48:28.117982  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:48:28.126480  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:48:28.135075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138711  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138759  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138809  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.172582  103439 command_runner.go:130] > 51391683
	I1002 20:48:28.172931  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
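The three blocks above repeat one pattern: hash the certificate with openssl, then symlink it into /etc/ssl/certs as <hash>.0 so OpenSSL's lookup finds it. Generalized as a sketch over an assumed directory:

	for pem in /usr/share/ca-certificates/*.pem; do
	  hash=$(openssl x509 -hash -noout -in "$pem")
	  sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	done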
	I1002 20:48:28.180914  103439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184555  103439 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184579  103439 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:48:28.184588  103439 command_runner.go:130] > Device: 8,1	Inode: 811435      Links: 1
	I1002 20:48:28.184598  103439 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:28.184608  103439 command_runner.go:130] > Access: 2025-10-02 20:44:21.070069799 +0000
	I1002 20:48:28.184616  103439 command_runner.go:130] > Modify: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184623  103439 command_runner.go:130] > Change: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184628  103439 command_runner.go:130] >  Birth: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184684  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:48:28.218476  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.218920  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:48:28.253813  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.254026  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:48:28.288477  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.288852  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:48:28.322969  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.323293  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:48:28.357073  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.357354  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:48:28.390854  103439 command_runner.go:130] > Certificate will not expire
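Each check above relies on -checkend 86400, which makes openssl exit non-zero if the certificate expires within 24 hours (86400 seconds). Looping over the whole cert directory, as a sketch:

	sudo find /var/lib/minikube/certs -name '*.crt' | while read -r crt; do
	  sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "expiring soon: $crt"
	done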
	I1002 20:48:28.391133  103439 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:28.391217  103439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:48:28.391280  103439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:48:28.420217  103439 cri.go:89] found id: ""
	I1002 20:48:28.420280  103439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:48:28.427672  103439 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:48:28.427700  103439 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:48:28.427710  103439 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:48:28.428396  103439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:48:28.428413  103439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:48:28.428455  103439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:48:28.435936  103439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:48:28.436039  103439 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.436106  103439 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-012915" cluster setting kubeconfig missing "functional-012915" context setting]
	I1002 20:48:28.436458  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.437072  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.437245  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.437717  103439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:48:28.437732  103439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:48:28.437753  103439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:48:28.437760  103439 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:48:28.437765  103439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:48:28.437782  103439 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:48:28.438160  103439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:48:28.446094  103439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:48:28.446137  103439 kubeadm.go:601] duration metric: took 17.717766ms to restartPrimaryControlPlane
	I1002 20:48:28.446149  103439 kubeadm.go:402] duration metric: took 55.025148ms to StartCluster
	I1002 20:48:28.446168  103439 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.446285  103439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.447035  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.447291  103439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:48:28.447487  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:28.447429  103439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:48:28.447531  103439 addons.go:69] Setting storage-provisioner=true in profile "functional-012915"
	I1002 20:48:28.447538  103439 addons.go:69] Setting default-storageclass=true in profile "functional-012915"
	I1002 20:48:28.447553  103439 addons.go:238] Setting addon storage-provisioner=true in "functional-012915"
	I1002 20:48:28.447556  103439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-012915"
	I1002 20:48:28.447587  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.447847  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.447963  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.456904  103439 out.go:179] * Verifying Kubernetes components...
	I1002 20:48:28.458283  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:28.468928  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.469101  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.469369  103439 addons.go:238] Setting addon default-storageclass=true in "functional-012915"
	I1002 20:48:28.469428  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.469783  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.469862  103439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:48:28.471474  103439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.471499  103439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:48:28.471557  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.496201  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.497174  103439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.497196  103439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:48:28.497262  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.518487  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.562123  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:28.575162  103439 node_ready.go:35] waiting up to 6m0s for node "functional-012915" to be "Ready" ...
	I1002 20:48:28.575316  103439 type.go:168] "Request Body" body=""
	I1002 20:48:28.575388  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:28.575672  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:28.608117  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.625656  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.661232  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.663490  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.663556  103439 retry.go:31] will retry after 361.771557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679351  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.679399  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679416  103439 retry.go:31] will retry after 152.242547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.831815  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.883542  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.883591  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.883623  103439 retry.go:31] will retry after 207.681653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.025956  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.075113  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.076262  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.076342  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.076623  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:29.077506  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.077533  103439 retry.go:31] will retry after 323.914971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.091861  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.140394  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.142831  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.142876  103439 retry.go:31] will retry after 594.351303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.402253  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.454867  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.454924  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.454957  103439 retry.go:31] will retry after 314.476021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.576263  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.576411  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.576803  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:29.738004  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.769756  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.788694  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.790987  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.791025  103439 retry.go:31] will retry after 1.197724944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822453  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.822502  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822528  103439 retry.go:31] will retry after 662.931836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.075955  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.076032  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.076409  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:30.485957  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:30.538516  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:30.538557  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.538578  103439 retry.go:31] will retry after 1.629504367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.575804  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.575880  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.576213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:30.576271  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:30.989890  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.043558  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.043619  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.043637  103439 retry.go:31] will retry after 801.444903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.075880  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.075960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.576114  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.576220  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.576603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.845951  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.899339  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.899391  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.899410  103439 retry.go:31] will retry after 2.181457366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.075931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.076334  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:32.168648  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:32.220495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:32.220539  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.220557  103439 retry.go:31] will retry after 1.373851602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.576076  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.576533  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:32.576599  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:33.076393  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.076861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.576337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.595591  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:33.646012  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:33.648297  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:33.648332  103439 retry.go:31] will retry after 3.090030694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.075896  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.075981  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.076263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:34.081465  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:34.133647  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:34.133724  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.133770  103439 retry.go:31] will retry after 3.497111827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.576313  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.576409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.576832  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:34.576893  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:35.075636  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.076135  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:35.575728  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.576239  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.076110  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.076196  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.076574  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.575482  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.575578  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.575974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.739297  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:36.791716  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:36.791786  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:36.791808  103439 retry.go:31] will retry after 4.619526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.076288  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.076368  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.076721  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:37.076814  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:37.576414  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.576492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.576867  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:37.632068  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:37.685537  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:37.685582  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.685612  103439 retry.go:31] will retry after 3.179037423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:38.076157  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.076633  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:38.576327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.576425  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.576797  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.075409  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.075858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.575455  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.575567  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.575934  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:39.576000  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:40.075790  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.076280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.575900  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.575982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.576339  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.865793  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:40.922102  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:40.922154  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:40.922173  103439 retry.go:31] will retry after 8.017978865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.075452  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.075541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.075959  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:41.412402  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:41.462892  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:41.465283  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.465317  103439 retry.go:31] will retry after 6.722422885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.575519  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.575978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:41.576042  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:42.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.075773  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:42.575731  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.575835  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.076025  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.076442  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.576250  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.576635  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:43.576711  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:44.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.076398  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.076835  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:44.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.575566  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.575930  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.075679  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.075780  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.076197  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.575922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.576287  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:46.075882  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.075956  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:46.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:46.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.576194  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.576549  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.076328  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.076667  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.576364  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.576474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.576869  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.075470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.075556  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.075935  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.188198  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:48.240819  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.240876  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.240960  103439 retry.go:31] will retry after 5.203774684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
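Each failed apply above is handed back to what appears to be minikube's retry helper (retry.go:31), which reschedules the command after a randomized, growing delay (5.2s here, then 7.7s, 18.7s, and longer below). A generic sketch of that retry-with-jittered-exponential-backoff shape; retryWithBackoff and its parameters are illustrative assumptions, not minikube's actual implementation.

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// sleeping an exponentially growing, jittered delay between tries --
// the same shape as the "will retry after 5.203774684s" lines above.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter: sleep between 0.5x and 1.5x of the nominal delay so
		// parallel addon appliers don't hammer the apiserver in lockstep.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed, last error: %w", attempts, err)
}

func main() {
	attempt := 0
	err := retryWithBackoff(func() error {
		attempt++
		if attempt < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, 5, 500*time.Millisecond)
	fmt.Println("result:", err)
}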
	I1002 20:48:48.575470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.575548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.575916  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:48.575985  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
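The paired "Request"/"Response" lines come from client-go's debug round-tripper (round_trippers.go), which wraps the HTTP transport and records the verb, URL, headers, and latency of every API call; status="" with milliseconds=0 is what it prints when the dial itself fails. A minimal net/http sketch of the same wrapping technique; loggingTransport is an illustrative name, not the client-go type.

// loggingTransport wraps an http.RoundTripper and logs each request's
// verb and URL plus the response status and latency, mirroring the
// round_trippers.go "Request"/"Response" pairs in the log above.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("Request verb=%s url=%s", req.Method, req.URL)
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		// Transport-level failures (e.g. connection refused) never yield
		// a status line, which is why the log shows status="" above.
		log.Printf("Response error=%v milliseconds=%d", err, time.Since(start).Milliseconds())
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}, Timeout: 5 * time.Second}
	if resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-012915"); err == nil {
		resp.Body.Close()
	}
}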
	I1002 20:48:48.940390  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:48.992334  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.994935  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.994965  103439 retry.go:31] will retry after 7.700365391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
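Both failure modes in this log reduce to one symptom: nothing is listening on port 8441, so kubectl's OpenAPI download (localhost:8441) and the node poll (192.168.49.2:8441) are both refused. A plain TCP dial confirms that from code; checkAPIServerPort below is a hypothetical helper, shown only to illustrate the diagnosis under that assumption.

// checkAPIServerPort reports whether anything is accepting TCP
// connections on the apiserver endpoint; "connection refused" here
// means the apiserver process itself is down, not a TLS or auth issue.
package main

import (
	"fmt"
	"net"
	"time"
)

func checkAPIServerPort(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("apiserver not reachable at %s: %w", addr, err)
	}
	conn.Close()
	return nil
}

func main() {
	for _, addr := range []string{"192.168.49.2:8441", "127.0.0.1:8441"} {
		if err := checkAPIServerPort(addr); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("listening:", addr)
		}
	}
}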
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 polled every ~500ms from 20:48:49.076 through 20:48:53.077, all refused; node_ready.go:55 connection-refused warnings at 20:48:50.576 and 20:48:52.576 ...]
	I1002 20:48:53.445247  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:53.496043  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:53.498518  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.498561  103439 retry.go:31] will retry after 18.668445084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continued every ~500ms from 20:48:53.576 through 20:48:56.577, all refused; node_ready.go:55 warning at 20:48:55.076 ...]
	I1002 20:48:56.695837  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:56.749495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:56.749534  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:56.749553  103439 retry.go:31] will retry after 17.757887541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continued every ~500ms from 20:48:57.076 through 20:49:12.077, all refused; node_ready.go:55 warnings roughly every 2.5s (20:48:57.077, 20:48:59.576, 20:49:01.577, 20:49:04.076, 20:49:06.576, 20:49:08.577, 20:49:11.077) ...]
	I1002 20:49:12.168044  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:12.220925  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:12.220980  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.221004  103439 retry.go:31] will retry after 18.69466529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continued every ~500ms from 20:49:12.575 through 20:49:14.076, all refused; node_ready.go:55 warning at 20:49:13.576 ...]
	I1002 20:49:14.507714  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:14.560377  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:14.560441  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.560472  103439 retry.go:31] will retry after 29.222161527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... poll continued every ~500ms from 20:49:14.576 through 20:49:30.576, all refused; node_ready.go:55 warnings at 20:49:15.576, 20:49:18.077, 20:49:20.576, 20:49:22.577, 20:49:25.076, 20:49:27.076, 20:49:29.077 ...]
	I1002 20:49:30.916459  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:30.966432  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:30.968861  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:30.968901  103439 retry.go:31] will retry after 21.359119468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:31.076302  103439 type.go:168] "Request Body" body=""
	I1002 20:49:31.076392  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:31.076792  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms through 20:49:43.576; node_ready.go:55 logs the same connection-refused warning roughly every 2s ...]
	W1002 20:49:43.076439  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
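The collapsed blocks above are minikube's node_ready poll: fetch the node object every ~500ms and check its Ready condition, which cannot succeed while nothing answers on port 8441. A sketch of the same check with client-go (node name and kubeconfig path taken from the log; this is an illustration, not minikube's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the failing kubectl invocations above use.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-012915", metav1.GetOptions{})
		if err != nil {
			// The branch this log is stuck in: dial tcp ... connection refused.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond) // same cadence as the poll above
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}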
	I1002 20:49:43.782991  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:43.835836  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:43.835901  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:43.835926  103439 retry.go:31] will retry after 22.850861202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:44.076251  103439 type.go:168] "Request Body" body=""
	I1002 20:49:44.076330  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:44.076662  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms through 20:49:52.076; node_ready.go:55 logs the same connection-refused warning roughly every 2s ...]
	W1002 20:49:52.076515  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:52.328832  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:52.382480  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382546  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382704  103439 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
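Every failure in this run is the same symptom: dial tcp ...:8441: connect: connection refused, meaning nothing is accepting connections on the apiserver port at all (as opposed to a timeout or a TLS/auth failure), so kubectl's suggested --validate=false would not help: validation download is merely the first request to hit the dead port. A quick probe that confirms this distinction, as a sketch (address taken from the log; not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.49.2:8441 is the apiserver endpoint the poll above keeps dialing.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// "connection refused" => the host is reachable but nothing listens:
		// the apiserver process is down, so no kubectl flag can make the
		// apply succeed.
		fmt.Println("apiserver not listening:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8441")
}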
	I1002 20:49:52.575971  103439 type.go:168] "Request Body" body=""
	I1002 20:49:52.576051  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:52.576411  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET poll repeats every ~500ms through 20:50:06.576; node_ready.go:55 logs the same connection-refused warning roughly every 2s, the last at 20:50:05.076700 ...]
	I1002 20:50:06.687689  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:50:06.737429  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739791  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739905  103439 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:50:06.742850  103439 out.go:179] * Enabled addons: 
	I1002 20:50:06.744531  103439 addons.go:514] duration metric: took 1m38.297120179s for enable addons: enabled=[]
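The addon phase gives up here with an empty set (enabled=[]) after 1m38s because the apiserver never came back within the retry budget. Re-running the applies only makes sense once the apiserver answers its readiness endpoint; a sketch of gating on /readyz (endpoint and address from the log; the insecure TLS config is a shortcut for a throwaway probe, not how minikube authenticates):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skip cert verification: acceptable for a local readiness probe sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("apiserver ready; safe to re-apply the addon manifests")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
}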
	I1002 20:50:07.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:50:07.076424  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:07.076810  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:07.076887  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:07.575585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:07.575664  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:07.576013  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:08.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:50:08.075943  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:08.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:08.576074  103439 type.go:168] "Request Body" body=""
	I1002 20:50:08.576184  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:08.576885  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:09.075637  103439 type.go:168] "Request Body" body=""
	I1002 20:50:09.075726  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:09.076126  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:09.575856  103439 type.go:168] "Request Body" body=""
	I1002 20:50:09.575938  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:09.576289  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:09.576365  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:10.076213  103439 type.go:168] "Request Body" body=""
	I1002 20:50:10.076289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:10.076668  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:10.575384  103439 type.go:168] "Request Body" body=""
	I1002 20:50:10.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:10.575843  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:11.075634  103439 type.go:168] "Request Body" body=""
	I1002 20:50:11.075712  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:11.076109  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:11.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:50:11.575921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:11.576276  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:12.076113  103439 type.go:168] "Request Body" body=""
	I1002 20:50:12.076186  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:12.076607  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:12.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:12.575967  103439 type.go:168] "Request Body" body=""
	I1002 20:50:12.576054  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:12.576464  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:13.076341  103439 type.go:168] "Request Body" body=""
	I1002 20:50:13.076412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:13.076780  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:13.575533  103439 type.go:168] "Request Body" body=""
	I1002 20:50:13.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:13.576033  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:14.075814  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.075900  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:14.576194  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:14.576695  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:15.075361  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.075442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.075840  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:15.575616  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.575700  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.075936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.076365  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.576255  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.576335  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.576673  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:16.576732  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:17.075466  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:17.575727  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.076032  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.076123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.076487  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.576201  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.576280  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.576630  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:19.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:50:19.075436  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:19.075879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:19.075940  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 poll repeats every ~500ms from 20:50:19.575 through 20:51:21.076; every attempt returns no response (status="" headers="" milliseconds=0), and node_ready.go:55 re-logs the identical "connection refused" (will retry) warning roughly every two seconds ...]
	I1002 20:51:21.575856  103439 type.go:168] "Request Body" body=""
	I1002 20:51:21.575946  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:21.576277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:22.075938  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.076020  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.076385  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:22.076458  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:22.576058  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.576150  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.576496  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.076164  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.076256  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.076616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.576268  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.576350  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.576704  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:24.076361  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.076448  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:24.076882  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:24.575376  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.575452  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.575842  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.075817  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.075926  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.076324  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.575977  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.576326  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.076018  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.076112  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.076484  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.576139  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.576216  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.576529  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:26.576601  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:27.076219  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.076333  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.076702  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:27.576348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.576421  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.576775  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.075490  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.075928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.575733  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.575828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.576180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:29.075796  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.075881  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.076267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:29.076325  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:29.575904  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.575995  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.576458  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.076430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.076826  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.575400  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.575481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.575844  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.075477  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.075558  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.076018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.575552  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.575626  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.575957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:31.576019  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:32.075567  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.075648  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.076000  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:32.575617  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.575691  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.576091  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.075777  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.075867  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.076312  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.575892  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.575966  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.576360  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:33.576436  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:34.075990  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.076064  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.076423  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:34.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.576242  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.075451  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.075544  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.075944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.575553  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.575632  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.575984  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:36.075611  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.075690  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.076097  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:36.076170  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:36.575781  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.575857  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.576209  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.075787  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.075868  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.076233  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.575919  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.576016  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:38.076037  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.076126  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.076506  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:38.076573  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:38.576216  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.576315  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.576715  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.076566  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.076671  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.077118  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.575701  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.576184  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:40.076137  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.076214  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.076550  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:40.076615  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:40.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.576390  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.075322  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.075403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.075780  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.575391  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.575470  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.575870  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.075943  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.575565  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.575660  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:42.576127  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:43.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.075718  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.076099  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:43.575699  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.575814  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.576217  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.075869  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.075942  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.076297  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.575949  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.576319  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:44.576388  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:45.076331  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.076413  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.076728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:45.575369  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.575463  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.575833  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.075482  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.075561  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.075954  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.575542  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.575624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.575972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:47.075530  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.075605  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.076010  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:47.076101  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:47.575610  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.575685  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.576069  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.075710  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.075809  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.576499  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:49.076190  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.076263  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.076621  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:49.076681  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:49.576270  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.576351  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.075539  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.075624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.575631  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.575707  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.576114  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.075711  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.075818  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.076157  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.575814  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.575890  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.576235  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:51.576316  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:52.075820  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.075911  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.076272  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:52.575858  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.576284  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.075878  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.075963  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.076342  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.576038  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.576491  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:53.576559  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:54.076212  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.076289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.076627  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:54.576310  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.576389  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.075503  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.075581  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.075972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.575557  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.575642  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.576018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:56.075601  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:56.076141  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:56.575721  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.575815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.576144  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.075712  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.075821  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.076181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.575767  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.576216  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:58.075841  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.075920  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:58.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:58.576187  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.576265  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.576613  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:59.076311  103439 type.go:168] "Request Body" body=""
	I1002 20:51:59.076391  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:59.076790  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:59.576375  103439 type.go:168] "Request Body" body=""
	I1002 20:51:59.576454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:59.576812  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:00.075544  103439 type.go:168] "Request Body" body=""
	I1002 20:52:00.075629  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:00.075981  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:00.575537  103439 type.go:168] "Request Body" body=""
	I1002 20:52:00.575633  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:00.576003  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:00.576089  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:01.075618  103439 type.go:168] "Request Body" body=""
	I1002 20:52:01.075698  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:01.076058  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:01.575676  103439 type.go:168] "Request Body" body=""
	I1002 20:52:01.575782  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:01.576133  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:02.075714  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.075815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.076186  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:02.575783  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.575871  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.576224  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:02.576299  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:03.075796  103439 type.go:168] "Request Body" body=""
	I1002 20:52:03.075881  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:03.076235  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:03.575826  103439 type.go:168] "Request Body" body=""
	I1002 20:52:03.575903  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:03.576282  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:04.075959  103439 type.go:168] "Request Body" body=""
	I1002 20:52:04.076039  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:04.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:04.576109  103439 type.go:168] "Request Body" body=""
	I1002 20:52:04.576183  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:04.576520  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:04.576584  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:05.075455  103439 type.go:168] "Request Body" body=""
	I1002 20:52:05.075532  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:05.075890  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:05.575433  103439 type.go:168] "Request Body" body=""
	I1002 20:52:05.575505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:05.575871  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:06.075440  103439 type.go:168] "Request Body" body=""
	I1002 20:52:06.075523  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:06.075827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:06.575497  103439 type.go:168] "Request Body" body=""
	I1002 20:52:06.575590  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:06.576026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:07.075591  103439 type.go:168] "Request Body" body=""
	I1002 20:52:07.075672  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:07.076053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:07.076126  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:07.575663  103439 type.go:168] "Request Body" body=""
	I1002 20:52:07.575766  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:07.576128  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:08.075654  103439 type.go:168] "Request Body" body=""
	I1002 20:52:08.075729  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:08.076096  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:08.575925  103439 type.go:168] "Request Body" body=""
	I1002 20:52:08.576003  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:08.576346  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:09.076056  103439 type.go:168] "Request Body" body=""
	I1002 20:52:09.076147  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:09.076530  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:09.076595  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:09.576165  103439 type.go:168] "Request Body" body=""
	I1002 20:52:09.576244  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:09.576584  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:10.075437  103439 type.go:168] "Request Body" body=""
	I1002 20:52:10.075510  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:10.075873  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:10.575468  103439 type.go:168] "Request Body" body=""
	I1002 20:52:10.575558  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:10.575906  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:11.075492  103439 type.go:168] "Request Body" body=""
	I1002 20:52:11.075568  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:11.075940  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:11.575529  103439 type.go:168] "Request Body" body=""
	I1002 20:52:11.575621  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:11.575986  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:11.576046  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 poll repeats every ~500ms from 20:52:12 through 20:53:12, each attempt logging "Response" status="" headers="" milliseconds=0, and the node_ready.go:55 "connection refused (will retry)" warning recurs every couple of seconds ...]
	I1002 20:53:12.575823  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.575896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.576250  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.075897  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.075987  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.576059  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.576149  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.576497  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:14.076230  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.076305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.076648  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:14.076724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:14.576300  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.576375  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.576711  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.075457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.075942  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.575476  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.575564  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.075498  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.075597  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.075974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.575607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.575990  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:16.576057  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:17.075599  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.076066  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:17.575633  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.575706  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.576088  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.075675  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.075775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.076143  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.575997  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.576068  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.576432  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:18.576492  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:19.076147  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.076228  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:19.576248  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.576675  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.075447  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.075529  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.075898  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.575465  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.575538  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.575923  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:21.075521  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.075978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:21.076044  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:21.575665  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.575775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.075717  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.075828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.076183  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.575808  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.575897  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.576256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:23.075928  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.076009  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.076405  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:23.076478  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:23.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.576168  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.576558  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.076203  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.076290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.076643  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.576321  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.576404  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.576814  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.075708  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.075822  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.076180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.575791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.575873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.576263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:25.576328  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:26.075894  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.075978  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.076323  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:26.576003  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.576076  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.576445  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.076142  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.076232  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.076600  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.576701  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:27.576806  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:28.076370  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.076473  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.076858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:28.575697  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.575806  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.576163  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.075772  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.075851  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.076254  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.575812  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:30.076121  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.076195  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.076543  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:30.076603  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:30.576211  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.576293  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.076423  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.076802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.575356  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.575434  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.575808  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.575336  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.575410  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.575777  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:32.575837  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:33.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.075475  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.075865  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:33.575440  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.575517  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.575846  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.075534  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.075996  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.575566  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.575655  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.576020  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:34.576093  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:35.075839  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.075921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.076292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:35.575879  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.575953  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.576311  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.075998  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.076095  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.076469  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.576150  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.576229  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.576577  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:36.576639  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:37.076335  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.076417  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.076801  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:37.575377  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.575453  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.575879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.075474  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.075957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.575935  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:39.076017  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.076111  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.076475  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:39.076596  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:39.576181  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.576257  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.075456  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.075533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.575509  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.575586  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.075524  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.075607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.075983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.575591  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.575678  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:41.576118  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:42.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:42.575677  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.575790  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.576150  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.075731  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.075831  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.076198  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.575889  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.575972  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.576366  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:43.576426  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:44.075602  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.075701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.076125  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:44.575700  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.575816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.076167  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.076247  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.076676  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.576379  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.576462  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.576855  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:45.576932  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:46.075425  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.075515  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.075882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:46.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.575563  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.575944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.075576  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.075649  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.076028  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.575645  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.575724  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.576173  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:48.075842  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.075922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.076288  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:48.076360  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:48.576176  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.576259  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.576606  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.076289  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.076364  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.076718  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.575397  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.575476  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.575864  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.075484  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.075575  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.575634  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.576140  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:50.576223  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:51.075766  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.075855  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.076251  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:51.575845  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.575936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.576310  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.076007  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.076100  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.076512  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.576200  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.576311  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.576659  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:52.576723  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:53.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.076426  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:53.575357  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.575435  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.575822  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.075408  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.075485  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.075889  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.575457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.575534  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.575882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:55.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.075915  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:55.076327  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:55.575878  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.575957  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.576307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.075931  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.076017  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.076382  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.576046  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.576133  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.576476  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:57.076106  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.076183  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.076505  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:57.076565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:57.576226  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.576298  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.076297  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.076394  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.076731  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.575639  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.576105  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.075691  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.075862  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.076223  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.575805  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.576267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:59.576342  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:00.076234  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.076318  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.076665  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:00.576298  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.576374  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.576723  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.075366  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.075825  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.575447  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.575533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.575904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:02.075556  103439 type.go:168] "Request Body" body=""
	I1002 20:54:02.075644  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:02.076053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:02.076132  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 poll repeats every ~500ms from 20:54:02 through 20:54:27, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go emits the same will-retry warning every 2-3s ...]
	I1002 20:54:27.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.575541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.075717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:28.076117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.576130  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.576214  103439 node_ready.go:38] duration metric: took 6m0.001003861s for node "functional-012915" to be "Ready" ...
	I1002 20:54:28.579396  103439 out.go:203] 
	W1002 20:54:28.581273  103439 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:54:28.581294  103439 out.go:285] * 
	W1002 20:54:28.583020  103439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:54:28.584974  103439 out.go:203] 

                                                
                                                
** /stderr **
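Every poll in the stderr above dies at the TCP connect, so the 6m0s node-ready budget expires without the client ever reading a Ready condition. Below is a minimal sketch of that retry loop, with the endpoint, the ~500ms interval, and the 6m deadline taken from the log; the real client authenticates against the apiserver and negotiates protobuf, which this illustrative probe skips, so it only reproduces the dial-level symptom:

	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// 6m0s is the wait budget reported by node_ready.go above.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// Skipping certificate verification here only serves to reach the
		// dial stage; the real round-tripper uses the cluster credentials.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		url := "https://192.168.49.2:8441/api/v1/nodes/functional-012915"

		for {
			select {
			case <-ctx.Done():
				// Matches the GUEST_START exit: WaitNodeCondition: context deadline exceeded
				fmt.Println("context deadline exceeded")
				return
			case <-time.After(500 * time.Millisecond): // poll cadence seen in the log
			}
			resp, err := client.Get(url)
			if err != nil {
				// e.g. dial tcp 192.168.49.2:8441: connect: connection refused
				fmt.Println("will retry:", err)
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("node object fetched; the real loop would now inspect the Ready condition")
				return
			}
		}
	}

Because the dial itself is refused, the loop never gets far enough to see a NotReady node: the apiserver behind port 8441 is simply not accepting connections for the entire window.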
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-amd64 start -p functional-012915 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m4.309096722s for "functional-012915" cluster.
I1002 20:54:29.065981   84100 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
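The snapshot above is a straight read of the host's proxy environment; a hypothetical helper reproducing the same "<empty>" rendering would look like:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(key)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q ", key, val)
		}
		fmt.Println()
	}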
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
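What the post-mortem actually consumes from this inspect dump sits under NetworkSettings.Ports: the apiserver's container port 8441/tcp is published on 127.0.0.1:32781, and the minikube network pins the container at 192.168.49.2. A sketch of extracting that mapping programmatically, under the assumption that only those fields matter (the struct below models nothing else):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// Minimal model of the docker-inspect JSON shown above.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-012915").Output()
		if err != nil {
			log.Fatal(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		if len(cs) == 0 {
			log.Fatal("no such container")
		}
		// For this report: apiserver 8441/tcp -> 127.0.0.1:32781
		for _, b := range cs[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}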
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (307.73085ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
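The --format={{.Host}} flag is a Go text/template rendered against minikube's status object, which is why stdout carries only the bare word "Running" while the exit status (2) separately encodes the degraded component state. A toy rendering of that behavior; the Status type below is illustrative, not minikube's actual one:

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative stand-in for the status object; only Host is modeled.
	type Status struct {
		Host string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		// Prints "Running", matching the stdout captured above.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running"}); err != nil {
			panic(err)
		}
	}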
helpers_test.go:252: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ delete │ -p download-only-072312 │ download-only-072312 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start │ --download-only -p download-docker-272222 --alsologtostderr --driver=docker  --container-runtime=crio │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ │
	│ delete │ -p download-docker-272222 │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start │ --download-only -p binary-mirror-809560 --alsologtostderr --binary-mirror http://127.0.0.1:39541 --driver=docker  --container-runtime=crio │ binary-mirror-809560 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ │
	│ delete │ -p binary-mirror-809560 │ binary-mirror-809560 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons │ disable dashboard -p addons-436069 │ addons-436069 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ │
	│ addons │ enable dashboard -p addons-436069 │ addons-436069 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ │
	│ start │ -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-436069 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ │
	│ delete │ -p addons-436069 │ addons-436069 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ start │ -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ │
	│ start │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ │
	│ start │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ │
	│ start │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ │
	│ pause │ nospam-461767 --log_dir /tmp/nospam-461767 pause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause │ nospam-461767 --log_dir /tmp/nospam-461767 pause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause │ nospam-461767 --log_dir /tmp/nospam-461767 pause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop │ nospam-461767 --log_dir /tmp/nospam-461767 stop │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop │ nospam-461767 --log_dir /tmp/nospam-461767 stop │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop │ nospam-461767 --log_dir /tmp/nospam-461767 stop │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete │ -p nospam-461767 │ nospam-461767 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ │
	│ start │ -p functional-012915 --alsologtostderr -v=8 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:48:24
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:48:24.799042  103439 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:24.799301  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799310  103439 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:24.799319  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799517  103439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:48:24.799997  103439 out.go:368] Setting JSON to false
	I1002 20:48:24.800864  103439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9046,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:48:24.800953  103439 start.go:140] virtualization: kvm guest
	I1002 20:48:24.803402  103439 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:48:24.804691  103439 notify.go:220] Checking for updates...
	I1002 20:48:24.804714  103439 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:48:24.806239  103439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:48:24.807535  103439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:24.808966  103439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:48:24.810229  103439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:48:24.811490  103439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:48:24.813239  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:24.813364  103439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:48:24.837336  103439 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:48:24.837438  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.897484  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.886469072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.897616  103439 docker.go:318] overlay module found
	I1002 20:48:24.900384  103439 out.go:179] * Using the docker driver based on existing profile
	I1002 20:48:24.901640  103439 start.go:304] selected driver: docker
	I1002 20:48:24.901656  103439 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.901817  103439 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:48:24.901921  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.957281  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.94713494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.957915  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:24.957982  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:24.958030  103439 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.959902  103439 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:48:24.961424  103439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:48:24.962912  103439 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:48:24.964111  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:24.964148  103439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:48:24.964157  103439 cache.go:58] Caching tarball of preloaded images
	I1002 20:48:24.964205  103439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:48:24.964264  103439 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:48:24.964275  103439 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:48:24.964363  103439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:48:24.984848  103439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:48:24.984867  103439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:48:24.984883  103439 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:48:24.984905  103439 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:48:24.984974  103439 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-012915"
	I1002 20:48:24.984991  103439 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:48:24.984998  103439 fix.go:54] fixHost starting: 
	I1002 20:48:24.985199  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:25.001871  103439 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:48:25.001898  103439 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:48:25.003929  103439 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:48:25.003964  103439 machine.go:93] provisionDockerMachine start ...
	I1002 20:48:25.004037  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.020996  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.021230  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.021243  103439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:48:25.163676  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.163710  103439 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:48:25.163781  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.181773  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.181995  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.182012  103439 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:48:25.333959  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.334023  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.352331  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.352586  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.352605  103439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:48:25.495627  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:48:25.495660  103439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:48:25.495680  103439 ubuntu.go:190] setting up certificates
	I1002 20:48:25.495691  103439 provision.go:84] configureAuth start
	I1002 20:48:25.495761  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:25.513229  103439 provision.go:143] copyHostCerts
	I1002 20:48:25.513269  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513297  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:48:25.513309  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513378  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:48:25.513471  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513489  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:48:25.513496  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513524  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:48:25.513585  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513606  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:48:25.513612  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513642  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:48:25.513706  103439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:48:25.699700  103439 provision.go:177] copyRemoteCerts
	I1002 20:48:25.699774  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:48:25.699818  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.717132  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:25.819529  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:48:25.819590  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:48:25.836961  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:48:25.837026  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:48:25.853991  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:48:25.854053  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:48:25.872348  103439 provision.go:87] duration metric: took 376.642239ms to configureAuth
	I1002 20:48:25.872378  103439 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:48:25.872536  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.872653  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.891454  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.891685  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.891706  103439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:48:26.156804  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:48:26.156829  103439 machine.go:96] duration metric: took 1.152858016s to provisionDockerMachine
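	Note: the `sudo tee /etc/sysconfig/crio.minikube` step above leaves a one-line drop-in on the node. A sketch of its contents, taken from the command and echo captured in this log, where 10.96.0.0/12 is the cluster's ServiceCIDR from the config dump above:
	
		# /etc/sysconfig/crio.minikube -- written by minikube during provisioning
		CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	This marks registries reachable on in-cluster service IPs as insecure, so CRI-O can pull from them over plain HTTP or with self-signed TLS.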
	I1002 20:48:26.156858  103439 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:48:26.156868  103439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:48:26.156920  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:48:26.156969  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.176188  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.278892  103439 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:48:26.282350  103439 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:48:26.282380  103439 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:48:26.282385  103439 command_runner.go:130] > VERSION_ID="12"
	I1002 20:48:26.282389  103439 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:48:26.282393  103439 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:48:26.282397  103439 command_runner.go:130] > ID=debian
	I1002 20:48:26.282401  103439 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:48:26.282406  103439 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:48:26.282410  103439 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:48:26.282454  103439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:48:26.282471  103439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:48:26.282480  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:48:26.282532  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:48:26.282613  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:48:26.282622  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 20:48:26.282689  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:48:26.282696  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> /etc/test/nested/copy/84100/hosts
	I1002 20:48:26.282728  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:48:26.291027  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:26.308674  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:48:26.325806  103439 start.go:296] duration metric: took 168.930408ms for postStartSetup
	I1002 20:48:26.325916  103439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:48:26.325957  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.343664  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.443702  103439 command_runner.go:130] > 54%
	I1002 20:48:26.443812  103439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:48:26.449039  103439 command_runner.go:130] > 135G
	I1002 20:48:26.449077  103439 fix.go:56] duration metric: took 1.464076482s for fixHost
	I1002 20:48:26.449092  103439 start.go:83] releasing machines lock for "functional-012915", held for 1.464107586s
	I1002 20:48:26.449173  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:26.467196  103439 ssh_runner.go:195] Run: cat /version.json
	I1002 20:48:26.467258  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.467342  103439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:48:26.467420  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.485438  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.485701  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.633417  103439 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:48:26.635353  103439 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:48:26.635549  103439 ssh_runner.go:195] Run: systemctl --version
	I1002 20:48:26.642439  103439 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:48:26.642484  103439 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:48:26.642544  103439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:48:26.678549  103439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:48:26.683206  103439 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:48:26.683277  103439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:48:26.683333  103439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:48:26.691349  103439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:48:26.691374  103439 start.go:495] detecting cgroup driver to use...
	I1002 20:48:26.691404  103439 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:48:26.691448  103439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:48:26.705612  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:48:26.718317  103439 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:48:26.718372  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:48:26.732790  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:48:26.745127  103439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:48:26.830208  103439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:48:26.916089  103439 docker.go:234] disabling docker service ...
	I1002 20:48:26.916158  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:48:26.931041  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:48:26.944314  103439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:48:27.029050  103439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:48:27.113127  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:48:27.125650  103439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:48:27.138813  103439 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
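	Note: with /etc/crictl.yaml written as above, crictl resolves the CRI-O socket from its default config path, which is why the later crictl calls in this log pass no endpoint flag. A minimal manual equivalent, sketched with the same endpoint:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version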
	I1002 20:48:27.139624  103439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:48:27.139683  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.148622  103439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:48:27.148678  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.157772  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.166537  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.175276  103439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:48:27.183311  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.192091  103439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.200250  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
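	Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the sketch below. Only the touched keys are shown, and their section placement assumes CRI-O's standard TOML layout, since the full file is not captured in this log:
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
	
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]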
	I1002 20:48:27.208827  103439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:48:27.216057  103439 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:48:27.216134  103439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:48:27.223341  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.309631  103439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:48:27.427286  103439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:48:27.427366  103439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:48:27.431839  103439 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:48:27.431866  103439 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:48:27.431885  103439 command_runner.go:130] > Device: 0,59	Inode: 3822        Links: 1
	I1002 20:48:27.431892  103439 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:27.431897  103439 command_runner.go:130] > Access: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431903  103439 command_runner.go:130] > Modify: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431907  103439 command_runner.go:130] > Change: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431912  103439 command_runner.go:130] >  Birth: 2025-10-02 20:48:27.408797776 +0000
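	Note: the 60s socket wait announced above amounts to polling stat on the socket path until CRI-O has finished restarting. A hand-rolled shell equivalent, as a sketch:
	
		timeout 60 sh -c 'until stat /var/run/crio/crio.sock >/dev/null 2>&1; do sleep 1; done'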
	I1002 20:48:27.431962  103439 start.go:563] Will wait 60s for crictl version
	I1002 20:48:27.432014  103439 ssh_runner.go:195] Run: which crictl
	I1002 20:48:27.435939  103439 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:48:27.436036  103439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:48:27.458416  103439 command_runner.go:130] > Version:  0.1.0
	I1002 20:48:27.458438  103439 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:48:27.458443  103439 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:48:27.458448  103439 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:48:27.460155  103439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:48:27.460222  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.486159  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.486183  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.486190  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.486198  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.486205  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.486212  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.486219  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.486226  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.486237  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.486246  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.486251  103439 command_runner.go:130] >      static
	I1002 20:48:27.486259  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.486263  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.486266  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.486272  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.486276  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.486300  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.486312  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.486330  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.486339  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.487532  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.514593  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.514624  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.514630  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.514634  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.514639  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.514643  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.514647  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.514654  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.514658  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.514662  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.514665  103439 command_runner.go:130] >      static
	I1002 20:48:27.514668  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.514677  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.514685  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.514688  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.514691  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.514695  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.514699  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.514706  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.514709  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.516768  103439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:48:27.518063  103439 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:48:27.535001  103439 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:48:27.539645  103439 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:48:27.539759  103439 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:48:27.539875  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:27.539928  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.571471  103439 command_runner.go:130] > {
	I1002 20:48:27.571489  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.571493  103439 command_runner.go:130] >     {
	I1002 20:48:27.571502  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.571507  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571513  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.571516  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571520  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571528  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.571535  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.571539  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571543  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.571547  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571554  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571560  103439 command_runner.go:130] >     },
	I1002 20:48:27.571568  103439 command_runner.go:130] >     {
	I1002 20:48:27.571574  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.571577  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571583  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.571588  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571592  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571600  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.571610  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.571616  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571620  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.571626  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571633  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571644  103439 command_runner.go:130] >     },
	I1002 20:48:27.571650  103439 command_runner.go:130] >     {
	I1002 20:48:27.571656  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.571662  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571667  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.571672  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571676  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571685  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.571694  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.571700  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571704  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.571710  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.571714  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571719  103439 command_runner.go:130] >     },
	I1002 20:48:27.571721  103439 command_runner.go:130] >     {
	I1002 20:48:27.571727  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.571733  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571752  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.571758  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571767  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571778  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.571787  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.571792  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571796  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.571802  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571805  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571810  103439 command_runner.go:130] >       },
	I1002 20:48:27.571824  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571831  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571834  103439 command_runner.go:130] >     },
	I1002 20:48:27.571838  103439 command_runner.go:130] >     {
	I1002 20:48:27.571844  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.571850  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571859  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.571866  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571870  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571879  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.571888  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.571894  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571898  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.571903  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571907  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571913  103439 command_runner.go:130] >       },
	I1002 20:48:27.571916  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571922  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571925  103439 command_runner.go:130] >     },
	I1002 20:48:27.571931  103439 command_runner.go:130] >     {
	I1002 20:48:27.571937  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.571943  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571948  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.571953  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571957  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571967  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.571976  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.571981  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571985  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.571991  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571994  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572000  103439 command_runner.go:130] >       },
	I1002 20:48:27.572003  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572009  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572012  103439 command_runner.go:130] >     },
	I1002 20:48:27.572015  103439 command_runner.go:130] >     {
	I1002 20:48:27.572023  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.572027  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572038  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.572048  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572054  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572061  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.572070  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.572076  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572080  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.572085  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572089  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572095  103439 command_runner.go:130] >     },
	I1002 20:48:27.572098  103439 command_runner.go:130] >     {
	I1002 20:48:27.572106  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.572109  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572114  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.572119  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572123  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572132  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.572157  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.572163  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572167  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.572172  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572175  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572180  103439 command_runner.go:130] >       },
	I1002 20:48:27.572184  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572189  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572192  103439 command_runner.go:130] >     },
	I1002 20:48:27.572197  103439 command_runner.go:130] >     {
	I1002 20:48:27.572203  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.572206  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572213  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.572217  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572222  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572229  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.572237  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.572248  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572254  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.572258  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572263  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.572267  103439 command_runner.go:130] >       },
	I1002 20:48:27.572273  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572282  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.572288  103439 command_runner.go:130] >     }
	I1002 20:48:27.572291  103439 command_runner.go:130] >   ]
	I1002 20:48:27.572295  103439 command_runner.go:130] > }
	I1002 20:48:27.573606  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.573628  103439 crio.go:433] Images already preloaded, skipping extraction
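	Note: the JSON above is the standard `sudo crictl images --output json` schema (id, repoTags, repoDigests, size, uid/username, pinned). To spot-check the preloaded tags by hand, something like the following works, assuming jq is available on the host; plain `sudo crictl images` prints the same data as a table:
	
		sudo crictl images --output json | jq -r '.images[].repoTags[]'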
	I1002 20:48:27.573687  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.599395  103439 command_runner.go:130] > {
	I1002 20:48:27.599418  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.599424  103439 command_runner.go:130] >     {
	I1002 20:48:27.599434  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.599439  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599447  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.599452  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599460  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599473  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.599500  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.599510  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599518  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.599526  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599540  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599549  103439 command_runner.go:130] >     },
	I1002 20:48:27.599555  103439 command_runner.go:130] >     {
	I1002 20:48:27.599575  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.599582  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599590  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.599596  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599604  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599624  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.599640  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.599648  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599656  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.599664  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599676  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599684  103439 command_runner.go:130] >     },
	I1002 20:48:27.599690  103439 command_runner.go:130] >     {
	I1002 20:48:27.599703  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.599713  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599722  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.599730  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599754  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599770  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.599783  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.599791  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599798  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.599808  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.599815  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599823  103439 command_runner.go:130] >     },
	I1002 20:48:27.599829  103439 command_runner.go:130] >     {
	I1002 20:48:27.599840  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.599849  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599858  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.599865  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599873  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599887  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.599901  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.599918  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599927  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.599934  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.599942  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.599948  103439 command_runner.go:130] >       },
	I1002 20:48:27.599974  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599984  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599989  103439 command_runner.go:130] >     },
	I1002 20:48:27.599994  103439 command_runner.go:130] >     {
	I1002 20:48:27.600004  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.600013  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600021  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.600029  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600036  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600050  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.600065  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.600073  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600080  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.600089  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600103  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600112  103439 command_runner.go:130] >       },
	I1002 20:48:27.600119  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600128  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600134  103439 command_runner.go:130] >     },
	I1002 20:48:27.600142  103439 command_runner.go:130] >     {
	I1002 20:48:27.600152  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.600161  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600171  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.600179  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600185  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600199  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.600213  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.600220  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600233  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.600242  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600250  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600258  103439 command_runner.go:130] >       },
	I1002 20:48:27.600264  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600273  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600278  103439 command_runner.go:130] >     },
	I1002 20:48:27.600284  103439 command_runner.go:130] >     {
	I1002 20:48:27.600297  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.600306  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600315  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.600332  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600339  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600354  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.600368  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.600376  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600383  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.600393  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600401  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600410  103439 command_runner.go:130] >     },
	I1002 20:48:27.600415  103439 command_runner.go:130] >     {
	I1002 20:48:27.600423  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.600428  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600437  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.600446  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600452  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600464  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.600497  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.600505  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600513  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.600520  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600527  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600536  103439 command_runner.go:130] >       },
	I1002 20:48:27.600554  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600563  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600569  103439 command_runner.go:130] >     },
	I1002 20:48:27.600574  103439 command_runner.go:130] >     {
	I1002 20:48:27.600585  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.600594  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600603  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.600611  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600618  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600631  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.600643  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.600652  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600659  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.600668  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600676  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.600684  103439 command_runner.go:130] >       },
	I1002 20:48:27.600692  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600701  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.600708  103439 command_runner.go:130] >     }
	I1002 20:48:27.600716  103439 command_runner.go:130] >   ]
	I1002 20:48:27.600721  103439 command_runner.go:130] > }
	I1002 20:48:27.600844  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.600859  103439 cache_images.go:85] Images are preloaded, skipping loading
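	Editor's note: the two `sudo crictl images --output json` dumps above are what minikube inspects (crio.go:514, cache_images.go:85) before concluding the preload can be skipped. Below is a minimal, self-contained Go sketch, not minikube's actual code, that decodes that JSON shape and checks a required-image list; the struct fields mirror the keys visible in the log, and the example tags are copied from it.

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the crictl `--output json` payload seen in the log.
	// Note that "size" is emitted as a string, not a number.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// e.g. produced with: sudo crictl images --output json > images.json
		raw, err := os.ReadFile("images.json")
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Illustrative required set, copied from the tags in the log above.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.34.1",
			"registry.k8s.io/etcd:3.6.4-0",
			"registry.k8s.io/coredns/coredns:v1.12.1",
			"registry.k8s.io/pause:3.10.1",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}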
	I1002 20:48:27.600868  103439 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:48:27.600982  103439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
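	Editor's note: the kubelet unit printed by kubeadm.go:946 above is assembled from the node settings logged around it (the `{ 192.168.49.2 8441 v1.34.1 crio true true }` tuple and the config struct). Below is a hedged sketch of how such a systemd drop-in could be rendered with Go's text/template, using only values visible in the log; the template text and field names here are illustrative, not minikube's real template.

	package main

	import (
		"os"
		"text/template"
	)

	// unit reproduces the drop-in shape from the log; only the three
	// substituted values vary per node.
	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the log above for this node.
		_ = t.Execute(os.Stdout, struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.34.1", "functional-012915", "192.168.49.2"})
	}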
	I1002 20:48:27.601057  103439 ssh_runner.go:195] Run: crio config
	I1002 20:48:27.642390  103439 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:48:27.642423  103439 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:48:27.642435  103439 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:48:27.642439  103439 command_runner.go:130] > #
	I1002 20:48:27.642450  103439 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:48:27.642460  103439 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:48:27.642470  103439 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:48:27.642501  103439 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:48:27.642510  103439 command_runner.go:130] > # reload'.
	I1002 20:48:27.642520  103439 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:48:27.642532  103439 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:48:27.642543  103439 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:48:27.642558  103439 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:48:27.642563  103439 command_runner.go:130] > [crio]
	I1002 20:48:27.642572  103439 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:48:27.642580  103439 command_runner.go:130] > # containers images, in this directory.
	I1002 20:48:27.642602  103439 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:48:27.642618  103439 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:48:27.642627  103439 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:48:27.642637  103439 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separate from Root.
	I1002 20:48:27.642643  103439 command_runner.go:130] > # imagestore = ""
	I1002 20:48:27.642656  103439 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:48:27.642670  103439 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:48:27.642681  103439 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:48:27.642691  103439 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:48:27.642708  103439 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:48:27.642715  103439 command_runner.go:130] > # storage_option = [
	I1002 20:48:27.642723  103439 command_runner.go:130] > # ]
	I1002 20:48:27.642733  103439 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:48:27.642762  103439 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:48:27.642770  103439 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:48:27.642783  103439 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:48:27.642796  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:48:27.642804  103439 command_runner.go:130] > # always happen on a node reboot
	I1002 20:48:27.642814  103439 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:48:27.642844  103439 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:48:27.642859  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:48:27.642869  103439 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:48:27.642883  103439 command_runner.go:130] > # version_file_persist = ""
	I1002 20:48:27.642895  103439 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:48:27.642919  103439 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:48:27.642930  103439 command_runner.go:130] > # internal_wipe = true
	I1002 20:48:27.642942  103439 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:48:27.642957  103439 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:48:27.642963  103439 command_runner.go:130] > # internal_repair = true
	I1002 20:48:27.642972  103439 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:48:27.642981  103439 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:48:27.642990  103439 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:48:27.642998  103439 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:48:27.643012  103439 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:48:27.643018  103439 command_runner.go:130] > [crio.api]
	I1002 20:48:27.643028  103439 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:48:27.643038  103439 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:48:27.643047  103439 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:48:27.643058  103439 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:48:27.643068  103439 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:48:27.643081  103439 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:48:27.643088  103439 command_runner.go:130] > # stream_port = "0"
	I1002 20:48:27.643100  103439 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:48:27.643107  103439 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:48:27.643117  103439 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:48:27.643126  103439 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:48:27.643137  103439 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:48:27.643149  103439 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643154  103439 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:48:27.643169  103439 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:48:27.643178  103439 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643188  103439 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:48:27.643205  103439 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:48:27.643218  103439 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:48:27.643228  103439 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:48:27.643241  103439 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:48:27.643279  103439 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643300  103439 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:48:27.643322  103439 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643333  103439 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:48:27.643343  103439 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:48:27.643352  103439 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:48:27.643370  103439 command_runner.go:130] > [crio.runtime]
	I1002 20:48:27.643381  103439 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:48:27.643393  103439 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:48:27.643403  103439 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:48:27.643414  103439 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:48:27.643423  103439 command_runner.go:130] > # default_ulimits = [
	I1002 20:48:27.643428  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643441  103439 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:48:27.643450  103439 command_runner.go:130] > # no_pivot = false
	I1002 20:48:27.643460  103439 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:48:27.643473  103439 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:48:27.643482  103439 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:48:27.643494  103439 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:48:27.643511  103439 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:48:27.643524  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643532  103439 command_runner.go:130] > # conmon = ""
	I1002 20:48:27.643539  103439 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:48:27.643549  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:48:27.643556  103439 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:48:27.643565  103439 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:48:27.643572  103439 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:48:27.643582  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643588  103439 command_runner.go:130] > # conmon_env = [
	I1002 20:48:27.643592  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643600  103439 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:48:27.643612  103439 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:48:27.643622  103439 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:48:27.643631  103439 command_runner.go:130] > # default_env = [
	I1002 20:48:27.643647  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643661  103439 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:48:27.643672  103439 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:48:27.643679  103439 command_runner.go:130] > # selinux = false
	I1002 20:48:27.643689  103439 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:48:27.643701  103439 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:48:27.643710  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643717  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.643729  103439 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:48:27.643755  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643766  103439 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:48:27.643777  103439 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:48:27.643790  103439 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:48:27.643804  103439 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:48:27.643815  103439 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:48:27.643826  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643834  103439 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:48:27.643847  103439 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:48:27.643856  103439 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:48:27.643863  103439 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:48:27.643875  103439 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:48:27.643886  103439 command_runner.go:130] > # blockio parameters.
	I1002 20:48:27.643892  103439 command_runner.go:130] > # blockio_reload = false
	I1002 20:48:27.643901  103439 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:48:27.643907  103439 command_runner.go:130] > # irqbalance daemon.
	I1002 20:48:27.643914  103439 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:48:27.643922  103439 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1002 20:48:27.643930  103439 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:48:27.643939  103439 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:48:27.643946  103439 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:48:27.643955  103439 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:48:27.643967  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643976  103439 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:48:27.643991  103439 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:48:27.643998  103439 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:48:27.644004  103439 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:48:27.644010  103439 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:48:27.644016  103439 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:48:27.644022  103439 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:48:27.644026  103439 command_runner.go:130] > # will be added.
	I1002 20:48:27.644030  103439 command_runner.go:130] > # default_capabilities = [
	I1002 20:48:27.644036  103439 command_runner.go:130] > # 	"CHOWN",
	I1002 20:48:27.644039  103439 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:48:27.644042  103439 command_runner.go:130] > # 	"FSETID",
	I1002 20:48:27.644046  103439 command_runner.go:130] > # 	"FOWNER",
	I1002 20:48:27.644049  103439 command_runner.go:130] > # 	"SETGID",
	I1002 20:48:27.644077  103439 command_runner.go:130] > # 	"SETUID",
	I1002 20:48:27.644089  103439 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:48:27.644096  103439 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:48:27.644099  103439 command_runner.go:130] > # 	"KILL",
	I1002 20:48:27.644102  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644111  103439 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:48:27.644117  103439 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:48:27.644124  103439 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:48:27.644129  103439 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:48:27.644137  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644140  103439 command_runner.go:130] > default_sysctls = [
	I1002 20:48:27.644146  103439 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:48:27.644149  103439 command_runner.go:130] > ]
	I1002 20:48:27.644153  103439 command_runner.go:130] > # List of devices on the host that a
	I1002 20:48:27.644159  103439 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:48:27.644165  103439 command_runner.go:130] > # allowed_devices = [
	I1002 20:48:27.644168  103439 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:48:27.644172  103439 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:48:27.644177  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644181  103439 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:48:27.644194  103439 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:48:27.644201  103439 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:48:27.644207  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644210  103439 command_runner.go:130] > # additional_devices = [
	I1002 20:48:27.644213  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644218  103439 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:48:27.644224  103439 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:48:27.644227  103439 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:48:27.644231  103439 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:48:27.644235  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644241  103439 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:48:27.644249  103439 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:48:27.644253  103439 command_runner.go:130] > # Defaults to false.
	I1002 20:48:27.644259  103439 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:48:27.644265  103439 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:48:27.644272  103439 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:48:27.644275  103439 command_runner.go:130] > # hooks_dir = [
	I1002 20:48:27.644280  103439 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:48:27.644283  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644289  103439 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:48:27.644297  103439 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:48:27.644302  103439 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:48:27.644305  103439 command_runner.go:130] > #
	I1002 20:48:27.644310  103439 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:48:27.644323  103439 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:48:27.644329  103439 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:48:27.644334  103439 command_runner.go:130] > #
	I1002 20:48:27.644340  103439 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:48:27.644346  103439 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:48:27.644352  103439 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:48:27.644356  103439 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:48:27.644359  103439 command_runner.go:130] > #
	I1002 20:48:27.644363  103439 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:48:27.644377  103439 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:48:27.644385  103439 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:48:27.644389  103439 command_runner.go:130] > # pids_limit = -1
	I1002 20:48:27.644397  103439 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:48:27.644403  103439 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:48:27.644409  103439 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:48:27.644418  103439 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:48:27.644422  103439 command_runner.go:130] > # log_size_max = -1
	I1002 20:48:27.644430  103439 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:48:27.644434  103439 command_runner.go:130] > # log_to_journald = false
	I1002 20:48:27.644439  103439 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:48:27.644444  103439 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:48:27.644450  103439 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:48:27.644454  103439 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:48:27.644461  103439 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:48:27.644465  103439 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:48:27.644470  103439 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:48:27.644473  103439 command_runner.go:130] > # read_only = false
	I1002 20:48:27.644482  103439 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:48:27.644490  103439 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:48:27.644494  103439 command_runner.go:130] > # live configuration reload.
	I1002 20:48:27.644500  103439 command_runner.go:130] > # log_level = "info"
	I1002 20:48:27.644505  103439 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:48:27.644509  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.644512  103439 command_runner.go:130] > # log_filter = ""
	I1002 20:48:27.644518  103439 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644525  103439 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:48:27.644529  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644536  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644542  103439 command_runner.go:130] > # uid_mappings = ""
	I1002 20:48:27.644547  103439 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644552  103439 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:48:27.644559  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644573  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644579  103439 command_runner.go:130] > # gid_mappings = ""
	I1002 20:48:27.644585  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:48:27.644591  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644598  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644606  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644611  103439 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:48:27.644617  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:48:27.644625  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644631  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644640  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644644  103439 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:48:27.644652  103439 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:48:27.644657  103439 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:48:27.644665  103439 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:48:27.644668  103439 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:48:27.644673  103439 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:48:27.644679  103439 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:48:27.644686  103439 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:48:27.644690  103439 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:48:27.644693  103439 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:48:27.644699  103439 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:48:27.644706  103439 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:48:27.644712  103439 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:48:27.644718  103439 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:48:27.644726  103439 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:48:27.644733  103439 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:48:27.644752  103439 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:48:27.644764  103439 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:48:27.644769  103439 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:48:27.644777  103439 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:48:27.644782  103439 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:48:27.644785  103439 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:48:27.644798  103439 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:48:27.644804  103439 command_runner.go:130] > # pinns_path = ""
	I1002 20:48:27.644810  103439 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:48:27.644817  103439 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:48:27.644821  103439 command_runner.go:130] > # enable_criu_support = true
	I1002 20:48:27.644826  103439 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:48:27.644831  103439 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:48:27.644837  103439 command_runner.go:130] > # enable_pod_events = false
	I1002 20:48:27.644842  103439 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:48:27.644849  103439 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:48:27.644853  103439 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:48:27.644858  103439 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:48:27.644867  103439 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I1002 20:48:27.644876  103439 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:48:27.644882  103439 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:48:27.644890  103439 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:48:27.644896  103439 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:48:27.644900  103439 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:48:27.644905  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644911  103439 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:48:27.644919  103439 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:48:27.644925  103439 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:48:27.644930  103439 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:48:27.644932  103439 command_runner.go:130] > #
	I1002 20:48:27.644937  103439 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:48:27.644943  103439 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:48:27.644947  103439 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:48:27.644951  103439 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:48:27.644955  103439 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:48:27.644959  103439 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:48:27.644963  103439 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:48:27.644968  103439 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:48:27.644972  103439 command_runner.go:130] > # monitor_env = []
	I1002 20:48:27.644980  103439 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:48:27.644987  103439 command_runner.go:130] > # allowed_annotations = []
	I1002 20:48:27.644992  103439 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:48:27.644998  103439 command_runner.go:130] > # no_sync_log = false
	I1002 20:48:27.645001  103439 command_runner.go:130] > # default_annotations = {}
	I1002 20:48:27.645007  103439 command_runner.go:130] > # stream_websockets = false
	I1002 20:48:27.645011  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.645086  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645099  103439 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:48:27.645104  103439 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:48:27.645110  103439 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:48:27.645115  103439 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:48:27.645119  103439 command_runner.go:130] > #   in $PATH.
	I1002 20:48:27.645124  103439 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:48:27.645131  103439 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:48:27.645137  103439 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:48:27.645142  103439 command_runner.go:130] > #   state.
	I1002 20:48:27.645148  103439 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:48:27.645156  103439 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1002 20:48:27.645161  103439 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:48:27.645173  103439 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:48:27.645180  103439 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:48:27.645186  103439 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:48:27.645191  103439 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:48:27.645197  103439 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:48:27.645205  103439 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:48:27.645216  103439 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:48:27.645224  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:48:27.645231  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:48:27.645239  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:48:27.645245  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:48:27.645254  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:48:27.645259  103439 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:48:27.645276  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:48:27.645284  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:48:27.645296  103439 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:48:27.645301  103439 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:48:27.645309  103439 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:48:27.645320  103439 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:48:27.645327  103439 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:48:27.645333  103439 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:48:27.645341  103439 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:48:27.645348  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:48:27.645355  103439 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:48:27.645360  103439 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:48:27.645368  103439 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:48:27.645373  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:48:27.645381  103439 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:48:27.645385  103439 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:48:27.645392  103439 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:48:27.645398  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:48:27.645405  103439 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 20:48:27.645410  103439 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:48:27.645417  103439 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:48:27.645426  103439 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:48:27.645433  103439 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:48:27.645441  103439 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:48:27.645446  103439 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:48:27.645454  103439 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:48:27.645461  103439 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:48:27.645468  103439 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:48:27.645475  103439 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:48:27.645484  103439 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:48:27.645490  103439 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:48:27.645496  103439 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:48:27.645505  103439 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:48:27.645517  103439 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:48:27.645523  103439 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:48:27.645529  103439 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:48:27.645542  103439 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:48:27.645548  103439 command_runner.go:130] > #
	I1002 20:48:27.645552  103439 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:48:27.645555  103439 command_runner.go:130] > #
	I1002 20:48:27.645560  103439 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:48:27.645569  103439 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:48:27.645573  103439 command_runner.go:130] > #
	I1002 20:48:27.645578  103439 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:48:27.645586  103439 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:48:27.645589  103439 command_runner.go:130] > #
	I1002 20:48:27.645595  103439 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:48:27.645598  103439 command_runner.go:130] > # feature.
	I1002 20:48:27.645601  103439 command_runner.go:130] > #
	I1002 20:48:27.645606  103439 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 20:48:27.645615  103439 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:48:27.645622  103439 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:48:27.645627  103439 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:48:27.645635  103439 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:48:27.645637  103439 command_runner.go:130] > #
	I1002 20:48:27.645643  103439 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:48:27.645651  103439 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:48:27.645653  103439 command_runner.go:130] > #
	I1002 20:48:27.645662  103439 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1002 20:48:27.645672  103439 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:48:27.645676  103439 command_runner.go:130] > #
	I1002 20:48:27.645682  103439 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:48:27.645690  103439 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:48:27.645693  103439 command_runner.go:130] > # limitation.
	I1002 20:48:27.645697  103439 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:48:27.645701  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:48:27.645709  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645715  103439 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:48:27.645725  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645731  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645746  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645754  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645762  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645768  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645777  103439 command_runner.go:130] > allowed_annotations = [
	I1002 20:48:27.645783  103439 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:48:27.645788  103439 command_runner.go:130] > ]
	I1002 20:48:27.645792  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645796  103439 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:48:27.645803  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:48:27.645807  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645811  103439 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:48:27.645815  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645818  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645822  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645826  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645830  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645834  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645838  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645844  103439 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:48:27.645852  103439 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:48:27.645857  103439 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:48:27.645866  103439 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1002 20:48:27.645875  103439 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:48:27.645886  103439 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:48:27.645894  103439 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:48:27.645899  103439 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:48:27.645907  103439 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:48:27.645917  103439 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:48:27.645930  103439 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:48:27.645940  103439 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:48:27.645943  103439 command_runner.go:130] > # Example:
	I1002 20:48:27.645949  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:48:27.645953  103439 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:48:27.645960  103439 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:48:27.645966  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:48:27.645972  103439 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:48:27.645975  103439 command_runner.go:130] > # cpushares = "5"
	I1002 20:48:27.645979  103439 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:48:27.645982  103439 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:48:27.645986  103439 command_runner.go:130] > # cpulimit = "35"
	I1002 20:48:27.645989  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645993  103439 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:48:27.646000  103439 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:48:27.646006  103439 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:48:27.646011  103439 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:48:27.646021  103439 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:48:27.646026  103439 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
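
Following the example above (the names io.crio/workload and io.crio.workload-type come from it; the pod spec is illustrative), opting a container in and overriding its cpushares, using the $annotation_prefix.$resource/$ctrName form described above, could look like:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                         # activation: key only, value ignored
        io.crio.workload-type.cpushares/app: "512"   # per-container resource override
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.10.1
    EOF
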
	I1002 20:48:27.646034  103439 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:48:27.646044  103439 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:48:27.646052  103439 command_runner.go:130] > # Default value is set to true
	I1002 20:48:27.646058  103439 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:48:27.646068  103439 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:48:27.646074  103439 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:48:27.646083  103439 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:48:27.646092  103439 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:48:27.646104  103439 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1002 20:48:27.646118  103439 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:48:27.646127  103439 command_runner.go:130] > # timezone = ""
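
For instance, matching container clocks to the host would be a one-line drop-in (the file path is illustrative):

    sudo tee /etc/crio/crio.conf.d/20-timezone.conf <<'EOF'
    [crio.runtime]
    timezone = "Local"
    EOF
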
	I1002 20:48:27.646136  103439 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:48:27.646144  103439 command_runner.go:130] > #
	I1002 20:48:27.646158  103439 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:48:27.646179  103439 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:48:27.646188  103439 command_runner.go:130] > [crio.image]
	I1002 20:48:27.646201  103439 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:48:27.646209  103439 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:48:27.646217  103439 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:48:27.646225  103439 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646229  103439 command_runner.go:130] > # global_auth_file = ""
	I1002 20:48:27.646236  103439 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:48:27.646241  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646248  103439 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.646254  103439 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:48:27.646260  103439 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646265  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646271  103439 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:48:27.646276  103439 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:48:27.646281  103439 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:48:27.646289  103439 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:48:27.646295  103439 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:48:27.646301  103439 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:48:27.646306  103439 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:48:27.646316  103439 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:48:27.646323  103439 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:48:27.646329  103439 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:48:27.646336  103439 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:48:27.646342  103439 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:48:27.646345  103439 command_runner.go:130] > # pinned_images = [
	I1002 20:48:27.646348  103439 command_runner.go:130] > # ]
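
The three pattern styles would read, as a sketch (the image names are placeholders):

    sudo tee /etc/crio/crio.conf.d/20-pinned-images.conf <<'EOF'
    [crio.image]
    pinned_images = [
            "registry.k8s.io/pause:3.10.1",   # exact: must match the entire name
            "quay.io/myorg/*",                # glob: wildcard at the end only
            "*critical*",                     # keyword: wildcards on both ends
    ]
    EOF
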
	I1002 20:48:27.646354  103439 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:48:27.646362  103439 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:48:27.646368  103439 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:48:27.646376  103439 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:48:27.646381  103439 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:48:27.646386  103439 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:48:27.646399  103439 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:48:27.646411  103439 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:48:27.646423  103439 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:48:27.646436  103439 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 20:48:27.646447  103439 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:48:27.646458  103439 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
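
So, assuming the default directory above, a per-namespace policy for a namespace named "dev" (illustrative) would live at /etc/crio/policies/dev.json; a minimal permissive one:

    sudo mkdir -p /etc/crio/policies
    sudo tee /etc/crio/policies/dev.json <<'EOF'
    { "default": [ { "type": "insecureAcceptAnything" } ] }
    EOF
    # Pulls whose sandbox config carries pod namespace "dev" use this file; all
    # other namespaces fall back to signature_policy or the system-wide policy.
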
	I1002 20:48:27.646470  103439 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:48:27.646480  103439 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:48:27.646486  103439 command_runner.go:130] > # changing them here.
	I1002 20:48:27.646491  103439 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:48:27.646497  103439 command_runner.go:130] > # insecure_registries = [
	I1002 20:48:27.646500  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646507  103439 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:48:27.646516  103439 command_runner.go:130] > # ignore; the last of these ignores volumes entirely.
	I1002 20:48:27.646522  103439 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:48:27.646527  103439 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:48:27.646531  103439 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:48:27.646538  103439 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:48:27.646544  103439 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:48:27.646551  103439 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:48:27.646557  103439 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:48:27.646571  103439 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, as pull_progress_timeout / 10.
	I1002 20:48:27.646579  103439 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:48:27.646583  103439 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:48:27.646590  103439 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:48:27.646596  103439 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:48:27.646605  103439 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:48:27.646611  103439 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:48:27.646615  103439 command_runner.go:130] > # short_name_mode = "enforcing"
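
Under "enforcing", ambiguous short names are best resolved with an alias in registries.conf.d rather than by disabling the check; a sketch (the alias is illustrative, see containers-registries.conf(5)):

    sudo tee /etc/containers/registries.conf.d/000-shortnames.conf <<'EOF'
    [aliases]
    "busybox" = "docker.io/library/busybox"   # the short name now resolves unambiguously
    EOF
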
	I1002 20:48:27.646620  103439 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 20:48:27.646628  103439 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:48:27.646632  103439 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:48:27.646638  103439 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:48:27.646649  103439 command_runner.go:130] > # CNI plugins.
	I1002 20:48:27.646655  103439 command_runner.go:130] > [crio.network]
	I1002 20:48:27.646660  103439 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:48:27.646667  103439 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1002 20:48:27.646671  103439 command_runner.go:130] > # cni_default_network = ""
	I1002 20:48:27.646678  103439 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:48:27.646682  103439 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:48:27.646690  103439 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:48:27.646693  103439 command_runner.go:130] > # plugin_dirs = [
	I1002 20:48:27.646696  103439 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:48:27.646699  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646703  103439 command_runner.go:130] > # List of included pod metrics.
	I1002 20:48:27.646709  103439 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:48:27.646711  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646716  103439 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 20:48:27.646722  103439 command_runner.go:130] > [crio.metrics]
	I1002 20:48:27.646726  103439 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:48:27.646732  103439 command_runner.go:130] > # enable_metrics = false
	I1002 20:48:27.646752  103439 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:48:27.646761  103439 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 20:48:27.646767  103439 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:48:27.646775  103439 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:48:27.646783  103439 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:48:27.646787  103439 command_runner.go:130] > # metrics_collectors = [
	I1002 20:48:27.646793  103439 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:48:27.646797  103439 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:48:27.646800  103439 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:48:27.646804  103439 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:48:27.646807  103439 command_runner.go:130] > # 	"operations_total",
	I1002 20:48:27.646811  103439 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:48:27.646815  103439 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:48:27.646818  103439 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:48:27.646822  103439 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:48:27.646831  103439 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:48:27.646835  103439 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:48:27.646839  103439 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:48:27.646842  103439 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:48:27.646846  103439 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:48:27.646850  103439 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:48:27.646853  103439 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:48:27.646857  103439 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:48:27.646860  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646868  103439 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:48:27.646874  103439 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:48:27.646880  103439 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:48:27.646886  103439 command_runner.go:130] > # metrics_port = 9090
	I1002 20:48:27.646891  103439 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:48:27.646901  103439 command_runner.go:130] > # metrics_socket = ""
	I1002 20:48:27.646909  103439 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:48:27.646914  103439 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:48:27.646922  103439 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:48:27.646928  103439 command_runner.go:130] > # certificate on any modification event.
	I1002 20:48:27.646932  103439 command_runner.go:130] > # metrics_cert = ""
	I1002 20:48:27.646939  103439 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:48:27.646943  103439 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:48:27.646949  103439 command_runner.go:130] > # metrics_key = ""
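
Putting the metrics options together, a drop-in enabling the endpoint on the default host and port, then scraping it (the file name is illustrative):

    sudo tee /etc/crio/crio.conf.d/20-metrics.conf <<'EOF'
    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    sudo systemctl restart crio
    curl -s http://127.0.0.1:9090/metrics | grep crio_operations   # sample collector output
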
	I1002 20:48:27.646954  103439 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:48:27.646960  103439 command_runner.go:130] > [crio.tracing]
	I1002 20:48:27.646966  103439 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:48:27.646971  103439 command_runner.go:130] > # enable_tracing = false
	I1002 20:48:27.646977  103439 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 20:48:27.646983  103439 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:48:27.646993  103439 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:48:27.646999  103439 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
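
Similarly for traces, assuming an OTLP collector is already listening on the default endpoint above:

    sudo tee /etc/crio/crio.conf.d/20-tracing.conf <<'EOF'
    [crio.tracing]
    enable_tracing = true
    tracing_sampling_rate_per_million = 1000000   # always sample
    EOF
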
	I1002 20:48:27.647003  103439 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:48:27.647009  103439 command_runner.go:130] > [crio.nri]
	I1002 20:48:27.647017  103439 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:48:27.647023  103439 command_runner.go:130] > # enable_nri = true
	I1002 20:48:27.647032  103439 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:48:27.647038  103439 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:48:27.647042  103439 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:48:27.647049  103439 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:48:27.647053  103439 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:48:27.647060  103439 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:48:27.647065  103439 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:48:27.647584  103439 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:48:27.647654  103439 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:48:27.647663  103439 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:48:27.647672  103439 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:48:27.647686  103439 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:48:27.647693  103439 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:48:27.647707  103439 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:48:27.647731  103439 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:48:27.647757  103439 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:48:27.647770  103439 command_runner.go:130] > # - OCI hook injection
	I1002 20:48:27.647779  103439 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:48:27.647792  103439 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:48:27.647798  103439 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:48:27.647805  103439 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:48:27.647819  103439 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:48:27.647828  103439 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:48:27.647837  103439 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:48:27.647841  103439 command_runner.go:130] > #
	I1002 20:48:27.647853  103439 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:48:27.647859  103439 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:48:27.647866  103439 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:48:27.647883  103439 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:48:27.647891  103439 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:48:27.647898  103439 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:48:27.647906  103439 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:48:27.647916  103439 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:48:27.647921  103439 command_runner.go:130] > # ]
	I1002 20:48:27.647929  103439 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
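
A sketch of turning the builtin validator on and rejecting just OCI hook injection, using the option names listed above (the drop-in path is illustrative):

    sudo tee /etc/crio/crio.conf.d/20-nri-validator.conf <<'EOF'
    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    EOF
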
	I1002 20:48:27.647939  103439 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:48:27.647949  103439 command_runner.go:130] > [crio.stats]
	I1002 20:48:27.647958  103439 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:48:27.647966  103439 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:48:27.647973  103439 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:48:27.647994  103439 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:48:27.648004  103439 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:48:27.648009  103439 command_runner.go:130] > # collection_period = 0
	I1002 20:48:27.648051  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627189517Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:48:27.648070  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627217069Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:48:27.648087  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627236914Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:48:27.648106  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627255188Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:48:27.648122  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.62731995Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.648141  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627489035Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:48:27.648161  103439 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:48:27.648318  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:27.648331  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:27.648354  103439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:48:27.648401  103439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:48:27.648942  103439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:48:27.649009  103439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:48:27.657181  103439 command_runner.go:130] > kubeadm
	I1002 20:48:27.657198  103439 command_runner.go:130] > kubectl
	I1002 20:48:27.657203  103439 command_runner.go:130] > kubelet
	I1002 20:48:27.657948  103439 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:48:27.658013  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:48:27.665603  103439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:48:27.678534  103439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:48:27.691111  103439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:48:27.703366  103439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:48:27.707046  103439 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:48:27.707133  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.791376  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:27.804011  103439 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:48:27.804040  103439 certs.go:195] generating shared ca certs ...
	I1002 20:48:27.804056  103439 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:27.804180  103439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:48:27.804232  103439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:48:27.804241  103439 certs.go:257] generating profile certs ...
	I1002 20:48:27.804334  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:48:27.804375  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:48:27.804412  103439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:48:27.804424  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:48:27.804435  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:48:27.804453  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:48:27.804469  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:48:27.804481  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:48:27.804494  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:48:27.804506  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:48:27.804518  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:48:27.804560  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:48:27.804591  103439 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:48:27.804601  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:48:27.804623  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:48:27.804645  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:48:27.804666  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:48:27.804704  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:27.804729  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 20:48:27.804763  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:27.804780  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 20:48:27.805294  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:48:27.822974  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:48:27.840455  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:48:27.858368  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:48:27.877146  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:48:27.895282  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:48:27.912487  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:48:27.929452  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:48:27.947144  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:48:27.964177  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:48:27.981785  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:48:27.999006  103439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:48:28.011646  103439 ssh_runner.go:195] Run: openssl version
	I1002 20:48:28.017389  103439 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:48:28.017621  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:48:28.025902  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029403  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029446  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029489  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.063085  103439 command_runner.go:130] > 3ec20f2e
	I1002 20:48:28.063182  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:48:28.071431  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:48:28.080075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083770  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083829  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083901  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.117894  103439 command_runner.go:130] > b5213941
	I1002 20:48:28.117982  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:48:28.126480  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:48:28.135075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138711  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138759  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138809  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.172582  103439 command_runner.go:130] > 51391683
	I1002 20:48:28.172931  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
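
The three link commands above implement OpenSSL's hashed-directory lookup: each CA lands under /usr/share/ca-certificates and gets a symlink named <subject-hash>.0 in /etc/ssl/certs. Reproduced by hand for the self-signed minikube CA:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # .0 = first cert with this hash
    openssl verify -CApath /etc/ssl/certs "$pem"    # lookup succeeds via the hash link
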
	I1002 20:48:28.180914  103439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184555  103439 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184579  103439 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:48:28.184588  103439 command_runner.go:130] > Device: 8,1	Inode: 811435      Links: 1
	I1002 20:48:28.184598  103439 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:28.184608  103439 command_runner.go:130] > Access: 2025-10-02 20:44:21.070069799 +0000
	I1002 20:48:28.184616  103439 command_runner.go:130] > Modify: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184623  103439 command_runner.go:130] > Change: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184628  103439 command_runner.go:130] >  Birth: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184684  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:48:28.218476  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.218920  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:48:28.253813  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.254026  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:48:28.288477  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.288852  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:48:28.322969  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.323293  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:48:28.357073  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.357354  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:48:28.390854  103439 command_runner.go:130] > Certificate will not expire
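
Each "-checkend 86400" probe above asks whether the certificate expires within the next 86400 seconds (24 hours); the exit status is what the caller acts on, the printed text is informational:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "valid for at least another day"   # openssl printed "Certificate will not expire"
    else
        echo "expires within 24h"               # openssl printed "Certificate will expire"
    fi
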
	I1002 20:48:28.391133  103439 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:28.391217  103439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:48:28.391280  103439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:48:28.420217  103439 cri.go:89] found id: ""
	I1002 20:48:28.420280  103439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:48:28.427672  103439 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:48:28.427700  103439 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:48:28.427710  103439 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:48:28.428396  103439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:48:28.428413  103439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:48:28.428455  103439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:48:28.435936  103439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:48:28.436039  103439 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.436106  103439 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-012915" cluster setting kubeconfig missing "functional-012915" context setting]
	I1002 20:48:28.436458  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.437072  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.437245  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.437717  103439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:48:28.437732  103439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:48:28.437753  103439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:48:28.437760  103439 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:48:28.437765  103439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:48:28.437782  103439 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:48:28.438160  103439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:48:28.446094  103439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:48:28.446137  103439 kubeadm.go:601] duration metric: took 17.717766ms to restartPrimaryControlPlane
	I1002 20:48:28.446149  103439 kubeadm.go:402] duration metric: took 55.025148ms to StartCluster
	I1002 20:48:28.446168  103439 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.446285  103439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.447035  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.447291  103439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:48:28.447487  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:28.447429  103439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:48:28.447531  103439 addons.go:69] Setting storage-provisioner=true in profile "functional-012915"
	I1002 20:48:28.447538  103439 addons.go:69] Setting default-storageclass=true in profile "functional-012915"
	I1002 20:48:28.447553  103439 addons.go:238] Setting addon storage-provisioner=true in "functional-012915"
	I1002 20:48:28.447556  103439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-012915"
	I1002 20:48:28.447587  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.447847  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.447963  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.456904  103439 out.go:179] * Verifying Kubernetes components...
	I1002 20:48:28.458283  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:28.468928  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.469101  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.469369  103439 addons.go:238] Setting addon default-storageclass=true in "functional-012915"
	I1002 20:48:28.469428  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.469783  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.469862  103439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:48:28.471474  103439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.471499  103439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:48:28.471557  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.496201  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.497174  103439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.497196  103439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:48:28.497262  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.518487  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.562123  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:28.575162  103439 node_ready.go:35] waiting up to 6m0s for node "functional-012915" to be "Ready" ...
	I1002 20:48:28.575316  103439 type.go:168] "Request Body" body=""
	I1002 20:48:28.575388  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:28.575672  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:28.608117  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.625656  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.661232  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.663490  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.663556  103439 retry.go:31] will retry after 361.771557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679351  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.679399  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679416  103439 retry.go:31] will retry after 152.242547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
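Each failed apply above is followed by a retry.go "will retry after …" line with a growing, jittered delay. A minimal sketch of that retry-with-backoff shape (illustrative only; not minikube's actual retry.go implementation):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a roughly doubling,
    // jittered interval between failures, like the delays in the log above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        delay := base
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2 // back off, matching the increasing intervals logged
        }
        return err
    }

    func main() {
        err := retry(5, 150*time.Millisecond, func() error {
            // Stands in for the failing kubectl apply in the log.
            return errors.New("connect: connection refused")
        })
        fmt.Println("gave up:", err)
    }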
	I1002 20:48:28.831815  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.883542  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.883591  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.883623  103439 retry.go:31] will retry after 207.681653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.025956  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.075113  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.076262  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.076342  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.076623  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:29.077506  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.077533  103439 retry.go:31] will retry after 323.914971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.091861  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.140394  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.142831  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.142876  103439 retry.go:31] will retry after 594.351303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.402253  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.454867  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.454924  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.454957  103439 retry.go:31] will retry after 314.476021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.576263  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.576411  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.576803  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:29.738004  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.769756  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.788694  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.790987  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.791025  103439 retry.go:31] will retry after 1.197724944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822453  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.822502  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822528  103439 retry.go:31] will retry after 662.931836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.075955  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.076032  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.076409  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:30.485957  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:30.538516  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:30.538557  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.538578  103439 retry.go:31] will retry after 1.629504367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.575804  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.575880  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.576213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:30.576271  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
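node_ready.go is polling GET /api/v1/nodes/functional-012915 roughly every 500ms and checking the node's Ready condition; every request fails with "connection refused" while the apiserver is down, and the loop keeps retrying up to its 6m0s budget. A sketch of an equivalent wait loop with client-go (assumes a config like the one shown earlier; this is not the actual minikube code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // waitNodeReady polls the node until its Ready condition is True,
    // tolerating transient errors such as "connection refused".
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            } // on error, fall through and retry, as the warnings above do
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %q not Ready within %v", name, timeout)
    }

    func main() {
        cfg := &rest.Config{Host: "https://192.168.49.2:8441"} // TLS fields omitted for brevity
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "functional-012915", 6*time.Minute))
    }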
	I1002 20:48:30.989890  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.043558  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.043619  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.043637  103439 retry.go:31] will retry after 801.444903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.075880  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.075960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.576114  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.576220  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.576603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.845951  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.899339  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.899391  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.899410  103439 retry.go:31] will retry after 2.181457366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.075931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.076334  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:32.168648  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:32.220495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:32.220539  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.220557  103439 retry.go:31] will retry after 1.373851602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.576076  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.576533  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:32.576599  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:33.076393  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.076861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.576337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.595591  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:33.646012  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:33.648297  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:33.648332  103439 retry.go:31] will retry after 3.090030694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.075896  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.075981  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.076263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:34.081465  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:34.133647  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:34.133724  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.133770  103439 retry.go:31] will retry after 3.497111827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.576313  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.576409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.576832  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:34.576893  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:35.075636  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.076135  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:35.575728  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.576239  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.076110  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.076196  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.076574  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.575482  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.575578  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.575974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.739297  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:36.791716  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:36.791786  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:36.791808  103439 retry.go:31] will retry after 4.619526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
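The validation failure itself explains the mechanism: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver's /openapi/v2 endpoint, so while the apiserver is unreachable even a valid manifest fails validation, and kubectl suggests --validate=false as the escape hatch. A sketch of invoking the same command the ssh_runner lines show, run locally via os/exec with the sudo VAR=value prefix preserved (illustrative; in the test this runs inside the node container over ssh):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the logged invocation: sudo accepts the KUBECONFIG=...
        // prefix as part of the command it runs.
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "apply", "--force", "-f",
            "/etc/kubernetes/addons/storage-provisioner.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }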
	I1002 20:48:37.076288  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.076368  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.076721  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:37.076814  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:37.576414  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.576492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.576867  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:37.632068  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:37.685537  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:37.685582  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.685612  103439 retry.go:31] will retry after 3.179037423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:38.076157  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.076633  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:38.576327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.576425  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.576797  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.075409  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.075858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.575455  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.575567  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.575934  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:39.576000  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:40.075790  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.076280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.575900  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.575982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.576339  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.865793  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:40.922102  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:40.922154  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:40.922173  103439 retry.go:31] will retry after 8.017978865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.075452  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.075541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.075959  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:41.412402  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:41.462892  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:41.465283  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.465317  103439 retry.go:31] will retry after 6.722422885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.575519  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.575978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:41.576042  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:42.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.075773  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:42.575731  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.575835  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.076025  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.076442  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.576250  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.576635  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:43.576711  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:44.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.076398  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.076835  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:44.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.575566  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.575930  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.075679  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.075780  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.076197  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.575922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.576287  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:46.075882  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.075956  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:46.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:46.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.576194  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.576549  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.076328  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.076667  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.576364  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.576474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.576869  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.075470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.075556  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.075935  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.188198  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:48.240819  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.240876  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.240960  103439 retry.go:31] will retry after 5.203774684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.575470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.575548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.575916  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:48.575985  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:48.940390  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:48.992334  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.994935  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.994965  103439 retry.go:31] will retry after 7.700365391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:49.076327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.076416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.076830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:50.576415  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
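These node_ready.go warnings come from a readiness loop that polls the node's Ready condition roughly every 500 ms and treats transport errors as transient. A hedged sketch of such a loop using client-go's wait helpers -- not minikube's node_ready.go itself; waitNodeReady is a hypothetical name:

package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady (hypothetical name) polls the named node every 500 ms until
// its Ready condition is True or the timeout expires. Transport errors such
// as "connection refused" are logged and retried rather than treated as
// fatal, which is the behavior the warnings above reflect.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil // transient: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}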
	I1002 20:48:51.076075  103439 type.go:168] "Request Body" body=""
	I1002 20:48:51.076176  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:51.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:52.576771  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:53.076363  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.076444  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.076831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:53.445247  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:53.496043  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:53.498518  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.498561  103439 retry.go:31] will retry after 18.668445084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
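Every failure here is the same symptom: nothing is listening on port 8441, so both the direct poll of 192.168.49.2:8441 and kubectl's localhost:8441 openapi fetch are refused. A trivial probe that reproduces the check outside kubectl, using the addresses from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the two apiserver endpoints the log shows being refused.
	for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err) // e.g. connect: connection refused
			continue
		}
		conn.Close()
		fmt.Printf("%s: apiserver port is listening\n", addr)
	}
}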
	I1002 20:48:53.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.576330  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:55.076287  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:55.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:48:55.575924  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:55.576280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.695837  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:56.749495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:56.749534  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:56.749553  103439 retry.go:31] will retry after 17.757887541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
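kubectl's hint to pass --validate=false would only skip the client-side /openapi/v2 download; it cannot rescue these retries, since the apply itself must still reach the same unreachable apiserver. For completeness, a sketch of the suggested invocation -- command and paths copied from the log, the Go wrapper itself illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The exact command from the log, plus the --validate=false flag the
	// error message suggests. sudo treats the leading VAR=value word as an
	// environment assignment for the command it runs.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\nexit: %v\n", out, err)
}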
	I1002 20:48:57.076066  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.076153  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.076611  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:57.076679  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:57.576325  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.576416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:59.576014  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:00.075849  103439 type.go:168] "Request Body" body=""
	I1002 20:49:00.075928  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:00.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:01.576970  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:02.075515  103439 type.go:168] "Request Body" body=""
	I1002 20:49:02.075606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:02.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:04.076381  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:04.576087  103439 type.go:168] "Request Body" body=""
	I1002 20:49:04.576249  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:04.576616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:06.576200  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:07.075781  103439 type.go:168] "Request Body" body=""
	I1002 20:49:07.075865  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:07.076245  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:08.576876  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:09.075362  103439 type.go:168] "Request Body" body=""
	I1002 20:49:09.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:09.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:11.076668  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:11.576237  103439 type.go:168] "Request Body" body=""
	I1002 20:49:11.576331  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:11.576683  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:12.168044  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:12.220925  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:12.220980  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.221004  103439 retry.go:31] will retry after 18.69466529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.575446  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.575535  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:13.576135  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:14.075639  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.075761  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.076134  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:14.507714  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:14.560377  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:14.560441  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.560472  103439 retry.go:31] will retry after 29.222161527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.575630  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.575695  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.575976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:15.576474  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:16.076107  103439 type.go:168] "Request Body" body=""
	I1002 20:49:16.076212  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:16.076649  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:18.076715  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:18.576306  103439 type.go:168] "Request Body" body=""
	I1002 20:49:18.576386  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:18.576768  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:20.576377  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:21.075989  103439 type.go:168] "Request Body" body=""
	I1002 20:49:21.076074  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:21.076448  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:22.576699  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:23.076279  103439 type.go:168] "Request Body" body=""
	I1002 20:49:23.076364  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:23.076694  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:25.076213  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:25.575710  103439 type.go:168] "Request Body" body=""
	I1002 20:49:25.575827  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:25.576189  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:27.076268  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:27.575795  103439 type.go:168] "Request Body" body=""
	I1002 20:49:27.575897  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:27.576231  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:29.076777  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:29.576355  103439 type.go:168] "Request Body" body=""
	I1002 20:49:29.576431  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:29.576786  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:30.916459  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:30.966432  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:30.968861  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:30.968901  103439 retry.go:31] will retry after 21.359119468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:31.076302  103439 type.go:168] "Request Body" body=""
	I1002 20:49:31.076392  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:31.076792  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:31.076872  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:31.575376  103439 type.go:168] "Request Body" body=""
	I1002 20:49:31.575450  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:31.575822  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:33.576605  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:34.076395  103439 type.go:168] "Request Body" body=""
	I1002 20:49:34.076474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:34.076849  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:36.075863  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:36.575603  103439 type.go:168] "Request Body" body=""
	I1002 20:49:36.575675  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:36.576026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:38.076908  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:38.575667  103439 type.go:168] "Request Body" body=""
	I1002 20:49:38.575774  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:38.576122  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:39.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:49:39.075943  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:39.076312  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:39.576198  103439 type.go:168] "Request Body" body=""
	I1002 20:49:39.576287  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:39.576659  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:40.075460  103439 type.go:168] "Request Body" body=""
	I1002 20:49:40.075544  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:40.075914  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:40.575679  103439 type.go:168] "Request Body" body=""
	I1002 20:49:40.575789  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:40.576134  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:40.576211  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:41.076023  103439 type.go:168] "Request Body" body=""
	I1002 20:49:41.076108  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:41.076444  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:41.576264  103439 type.go:168] "Request Body" body=""
	I1002 20:49:41.576340  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:41.576673  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:42.075461  103439 type.go:168] "Request Body" body=""
	I1002 20:49:42.075562  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:42.075947  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:42.575679  103439 type.go:168] "Request Body" body=""
	I1002 20:49:42.575775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:42.576136  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:43.075963  103439 type.go:168] "Request Body" body=""
	I1002 20:49:43.076038  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:43.076375  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:43.076439  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:43.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:49:43.576333  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:43.576694  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:43.782991  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:43.835836  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:43.835901  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:43.835926  103439 retry.go:31] will retry after 22.850861202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
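	When the apply fails, retry.go schedules another attempt after a jittered delay (22.850861202s here). A hedged sketch of that pattern — re-running the exact command from the log with exponential backoff plus jitter; the backoff constants, attempt cap, and helper name are our assumptions, and this runs locally rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts are
// exhausted, doubling the base delay each round and jittering it (0.5x-1.5x)
// so concurrent callers do not retry in lockstep.
func applyWithRetry(manifest string, attempts int) error {
	delay := 5 * time.Second
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl",
			"apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("apply failed, will retry after %v: %v\n%s", sleep, err, out)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
}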
	[... same GET repeated every ~500ms, 20:49:44.076 through 20:49:52.076, each returning an empty connection-refused response; node_ready.go:55 "will retry" warnings at 20:49:45.576, 20:49:48.076, 20:49:50.076, and 20:49:52.076 ...]
	I1002 20:49:52.328832  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:52.382480  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382546  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382704  103439 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
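	Each attempt fails the same way because kubectl first downloads the OpenAPI schema from the apiserver to validate the manifest, and that connection is refused before validation even starts; --validate=false would only skip the schema download, not the refused apply itself. One way to avoid burning retries (our suggestion, not what minikube does) is to gate the apply on the apiserver's /readyz endpoint. The address and port come from the log; the insecure TLS setting is an assumption for a local test cluster whose certificate is not in the system trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady polls /readyz once a second until it returns 200 OK or the
// deadline passes. Unauthenticated access to /readyz is normally allowed via
// the system:public-info-viewer role.
func apiserverReady(base string, timeout time.Duration) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// assumption: self-signed cluster cert, so skip verification
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if apiserverReady("https://192.168.49.2:8441", 2*time.Minute) {
		fmt.Println("apiserver up; safe to run kubectl apply")
	}
}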
	[... same GET repeated every ~500ms, 20:49:52.575 through 20:50:06.576, each returning an empty connection-refused response; node_ready.go:55 "will retry" warnings at 20:49:54.076, 20:49:56.576, 20:49:59.076, 20:50:01.076, 20:50:03.076, and 20:50:05.076 ...]
	I1002 20:50:06.687689  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:50:06.737429  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739791  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739905  103439 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:50:06.742850  103439 out.go:179] * Enabled addons: 
	I1002 20:50:06.744531  103439 addons.go:514] duration metric: took 1m38.297120179s for enable addons: enabled=[]
	[... same GET repeated every ~500ms, 20:50:07.076 through 20:50:31.575, each returning an empty connection-refused response; node_ready.go:55 "will retry" warnings continued every ~2-2.5s (last at 20:50:29.576); excerpt ends mid-request ...]
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:31.576035  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:32.075585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.075666  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.076026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:32.076094  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:32.575632  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.575709  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.576117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.075652  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.076100  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.575758  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.576149  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:34.075715  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.075810  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.076153  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:34.076216  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:34.575779  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.575858  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.576247  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.076148  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.076233  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.076598  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.576262  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.576347  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.576802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.075374  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.075824  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.575422  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.575848  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:36.575906  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:37.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.075521  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.075904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:37.575460  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.575565  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.575952  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.075497  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.075579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.075949  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.575923  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.576292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:38.576357  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:39.075970  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.076045  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.076459  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:39.576183  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.576276  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.576637  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.075394  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.075469  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.575390  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.575465  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.575823  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:41.076191  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.076274  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.076628  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:41.076694  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:41.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.576770  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.076380  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.076481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.076834  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.575420  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.075513  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.075604  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.075967  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.575585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.575664  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:43.576146  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:44.075681  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.076261  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:44.575868  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.575964  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.076248  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.076357  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.076714  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.576124  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.576501  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:45.576565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:46.076153  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.076231  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:46.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.576334  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.576706  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.076362  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.076446  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.575474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.575854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:48.075429  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.075510  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:48.075914  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:48.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.575495  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.575887  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.075463  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.075543  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.075937  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.575579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.575950  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:50.075789  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.075872  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.076231  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:50.076332  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:50.575815  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.575914  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.075877  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.075952  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.076337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.576100  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.576202  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.576539  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:52.076187  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.076262  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.076592  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:52.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:52.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.075460  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.075819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.575520  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.575927  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.075511  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.075600  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.075971  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.575643  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.576052  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:54.576136  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:55.075833  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.075908  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.076313  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:55.575945  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.576033  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.576428  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.076124  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.076205  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.076588  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.576221  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.576325  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.576662  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:56.576724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:57.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.076386  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.076786  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:57.575325  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.575412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.575787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.076352  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.076422  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.076854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.575806  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.575901  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:59.075853  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.075934  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.076321  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:59.076383  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:59.575967  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.576070  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.576437  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:00.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:51:00.076327  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:00.076671  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:00.576348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:00.576435  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:00.576826  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:01.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:51:01.075456  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:01.075840  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:01.575383  103439 type.go:168] "Request Body" body=""
	I1002 20:51:01.575471  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:01.575834  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:01.575909  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:02.075405  103439 type.go:168] "Request Body" body=""
	I1002 20:51:02.075486  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:02.075854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:02.575427  103439 type.go:168] "Request Body" body=""
	I1002 20:51:02.575517  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:02.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:03.075458  103439 type.go:168] "Request Body" body=""
	I1002 20:51:03.075534  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:03.075891  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:03.576314  103439 type.go:168] "Request Body" body=""
	I1002 20:51:03.576387  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:03.576727  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:03.576806  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:04.076341  103439 type.go:168] "Request Body" body=""
	I1002 20:51:04.076414  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:04.076789  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:04.575407  103439 type.go:168] "Request Body" body=""
	I1002 20:51:04.575488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:04.575830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:05.075787  103439 type.go:168] "Request Body" body=""
	I1002 20:51:05.075860  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:05.076258  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:05.575847  103439 type.go:168] "Request Body" body=""
	I1002 20:51:05.575921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:05.576283  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:06.075890  103439 type.go:168] "Request Body" body=""
	I1002 20:51:06.075964  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:06.076395  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:06.076456  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:06.575993  103439 type.go:168] "Request Body" body=""
	I1002 20:51:06.576075  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:06.576412  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:07.076071  103439 type.go:168] "Request Body" body=""
	I1002 20:51:07.076154  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:07.076593  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:07.576229  103439 type.go:168] "Request Body" body=""
	I1002 20:51:07.576309  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:07.576657  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:08.076385  103439 type.go:168] "Request Body" body=""
	I1002 20:51:08.076464  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:08.076893  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:08.076954  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:08.575699  103439 type.go:168] "Request Body" body=""
	I1002 20:51:08.575787  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:08.576128  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:09.075675  103439 type.go:168] "Request Body" body=""
	I1002 20:51:09.075764  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:09.076126  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:09.576325  103439 type.go:168] "Request Body" body=""
	I1002 20:51:09.576432  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:09.576808  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:10.075645  103439 type.go:168] "Request Body" body=""
	I1002 20:51:10.075730  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:10.076142  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:10.575721  103439 type.go:168] "Request Body" body=""
	I1002 20:51:10.575820  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:10.576241  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:10.576304  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:11.075870  103439 type.go:168] "Request Body" body=""
	I1002 20:51:11.075955  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:11.076373  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:11.576041  103439 type.go:168] "Request Body" body=""
	I1002 20:51:11.576140  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:11.576505  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:12.076251  103439 type.go:168] "Request Body" body=""
	I1002 20:51:12.076345  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:12.076705  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:12.576352  103439 type.go:168] "Request Body" body=""
	I1002 20:51:12.576428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:12.576813  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:12.576892  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:13.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.075526  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.075917  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:13.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.575640  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.576048  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.075644  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.075715  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.575664  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.575795  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.576210  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:15.076065  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.076151  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.076548  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:15.076609  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:15.576209  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.576658  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.076387  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.076472  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.575432  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.575509  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.575925  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.075499  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.075588  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.075953  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.575636  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.575717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.576139  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:17.576206  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:18.075726  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.075840  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.076170  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:18.576043  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.576134  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.576500  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.076156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.076608  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.576287  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:19.576823  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:20.075605  103439 type.go:168] "Request Body" body=""
	I1002 20:51:20.075689  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:20.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:22.076458  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 polls repeat every ~500ms from 20:51:20 through 20:52:22, each returning an empty response after "connect: connection refused"; the "will retry" warning above recurs roughly every 2s throughout ...]
	I1002 20:52:22.575725  103439 type.go:168] "Request Body" body=""
	I1002 20:52:22.575823  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:22.576174  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:22.576239  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:23.075794  103439 type.go:168] "Request Body" body=""
	I1002 20:52:23.075868  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:23.076225  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:23.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:52:23.575550  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:23.575980  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:24.075592  103439 type.go:168] "Request Body" body=""
	I1002 20:52:24.075681  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:24.076032  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:24.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:52:24.575768  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:24.576132  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:25.075932  103439 type.go:168] "Request Body" body=""
	I1002 20:52:25.076017  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:25.076379  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:25.076450  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:25.576068  103439 type.go:168] "Request Body" body=""
	I1002 20:52:25.576165  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:25.576567  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:26.076267  103439 type.go:168] "Request Body" body=""
	I1002 20:52:26.076346  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:26.076713  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:26.576395  103439 type.go:168] "Request Body" body=""
	I1002 20:52:26.576472  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:26.576858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:27.075411  103439 type.go:168] "Request Body" body=""
	I1002 20:52:27.075491  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:27.075850  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:27.575491  103439 type.go:168] "Request Body" body=""
	I1002 20:52:27.575573  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:27.575964  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:27.576028  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:28.075504  103439 type.go:168] "Request Body" body=""
	I1002 20:52:28.075596  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:28.075950  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:28.575839  103439 type.go:168] "Request Body" body=""
	I1002 20:52:28.576029  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:28.576476  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:29.075757  103439 type.go:168] "Request Body" body=""
	I1002 20:52:29.075848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:29.076242  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:29.575836  103439 type.go:168] "Request Body" body=""
	I1002 20:52:29.575917  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:29.576348  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:29.576430  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:30.076283  103439 type.go:168] "Request Body" body=""
	I1002 20:52:30.076376  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:30.076774  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:30.575345  103439 type.go:168] "Request Body" body=""
	I1002 20:52:30.575422  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:30.575772  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:31.075417  103439 type.go:168] "Request Body" body=""
	I1002 20:52:31.075490  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:31.075917  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:31.575405  103439 type.go:168] "Request Body" body=""
	I1002 20:52:31.575482  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:31.575879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:32.075416  103439 type.go:168] "Request Body" body=""
	I1002 20:52:32.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:32.075830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:32.075891  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:32.575384  103439 type.go:168] "Request Body" body=""
	I1002 20:52:32.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:32.575860  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:33.075424  103439 type.go:168] "Request Body" body=""
	I1002 20:52:33.075505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:33.075919  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:33.575575  103439 type.go:168] "Request Body" body=""
	I1002 20:52:33.575659  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:33.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:34.075603  103439 type.go:168] "Request Body" body=""
	I1002 20:52:34.075689  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:34.076059  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:34.076133  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:34.575643  103439 type.go:168] "Request Body" body=""
	I1002 20:52:34.575717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:34.576097  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:35.075919  103439 type.go:168] "Request Body" body=""
	I1002 20:52:35.076001  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:35.076401  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:35.576097  103439 type.go:168] "Request Body" body=""
	I1002 20:52:35.576190  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:35.576569  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:36.076242  103439 type.go:168] "Request Body" body=""
	I1002 20:52:36.076321  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:36.076684  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:36.076771  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:36.576350  103439 type.go:168] "Request Body" body=""
	I1002 20:52:36.576431  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:36.576806  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:37.075371  103439 type.go:168] "Request Body" body=""
	I1002 20:52:37.075445  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:37.075830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:37.575379  103439 type.go:168] "Request Body" body=""
	I1002 20:52:37.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:37.575827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:38.075420  103439 type.go:168] "Request Body" body=""
	I1002 20:52:38.075494  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:38.075864  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:38.575408  103439 type.go:168] "Request Body" body=""
	I1002 20:52:38.575505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:38.575831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:38.575904  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:39.075468  103439 type.go:168] "Request Body" body=""
	I1002 20:52:39.075555  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:39.075908  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:39.575486  103439 type.go:168] "Request Body" body=""
	I1002 20:52:39.575564  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:39.575943  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:40.075840  103439 type.go:168] "Request Body" body=""
	I1002 20:52:40.075937  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:40.076335  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:40.576013  103439 type.go:168] "Request Body" body=""
	I1002 20:52:40.576104  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:40.576440  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:40.576500  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:41.076194  103439 type.go:168] "Request Body" body=""
	I1002 20:52:41.076306  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:41.076712  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:41.575323  103439 type.go:168] "Request Body" body=""
	I1002 20:52:41.575412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:41.575799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:42.075383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:42.075484  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:42.075843  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:42.575392  103439 type.go:168] "Request Body" body=""
	I1002 20:52:42.575469  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:42.575828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:43.075519  103439 type.go:168] "Request Body" body=""
	I1002 20:52:43.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:43.076045  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:43.076121  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:43.575640  103439 type.go:168] "Request Body" body=""
	I1002 20:52:43.575711  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:43.576105  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:44.075717  103439 type.go:168] "Request Body" body=""
	I1002 20:52:44.075847  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:44.076211  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:44.575828  103439 type.go:168] "Request Body" body=""
	I1002 20:52:44.575911  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:44.576256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:45.076131  103439 type.go:168] "Request Body" body=""
	I1002 20:52:45.076212  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:45.076558  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:45.076640  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:45.576225  103439 type.go:168] "Request Body" body=""
	I1002 20:52:45.576305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:45.576652  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:46.076299  103439 type.go:168] "Request Body" body=""
	I1002 20:52:46.076380  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:46.076766  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:46.575344  103439 type.go:168] "Request Body" body=""
	I1002 20:52:46.575417  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:46.575789  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:47.075373  103439 type.go:168] "Request Body" body=""
	I1002 20:52:47.075452  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:47.075833  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:47.575383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:47.575467  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:47.575823  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:47.575904  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:48.075383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:48.075461  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:48.075828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:48.575654  103439 type.go:168] "Request Body" body=""
	I1002 20:52:48.575753  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:48.576167  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:49.075788  103439 type.go:168] "Request Body" body=""
	I1002 20:52:49.075878  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:49.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:49.575841  103439 type.go:168] "Request Body" body=""
	I1002 20:52:49.575931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:49.576281  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:49.576341  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:50.076152  103439 type.go:168] "Request Body" body=""
	I1002 20:52:50.076231  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:50.076577  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:50.576298  103439 type.go:168] "Request Body" body=""
	I1002 20:52:50.576372  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:50.576726  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:51.075356  103439 type.go:168] "Request Body" body=""
	I1002 20:52:51.075442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:51.075828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:51.575458  103439 type.go:168] "Request Body" body=""
	I1002 20:52:51.575551  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:51.575985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:52.075581  103439 type.go:168] "Request Body" body=""
	I1002 20:52:52.075659  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:52.076041  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:52.076130  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:52.575624  103439 type.go:168] "Request Body" body=""
	I1002 20:52:52.575701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:52.576057  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:53.075653  103439 type.go:168] "Request Body" body=""
	I1002 20:52:53.075728  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:53.076123  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:53.575676  103439 type.go:168] "Request Body" body=""
	I1002 20:52:53.575779  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:53.576133  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:54.075709  103439 type.go:168] "Request Body" body=""
	I1002 20:52:54.075829  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:54.076213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:54.076292  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:54.575795  103439 type.go:168] "Request Body" body=""
	I1002 20:52:54.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:54.576247  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:55.076140  103439 type.go:168] "Request Body" body=""
	I1002 20:52:55.076229  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:55.076568  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:55.576341  103439 type.go:168] "Request Body" body=""
	I1002 20:52:55.576431  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:55.576817  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:56.075357  103439 type.go:168] "Request Body" body=""
	I1002 20:52:56.075448  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:56.075831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:56.575413  103439 type.go:168] "Request Body" body=""
	I1002 20:52:56.575503  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:56.575861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:56.575933  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:57.075427  103439 type.go:168] "Request Body" body=""
	I1002 20:52:57.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:57.076006  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:57.575579  103439 type.go:168] "Request Body" body=""
	I1002 20:52:57.575653  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:57.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:58.075581  103439 type.go:168] "Request Body" body=""
	I1002 20:52:58.075671  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:58.076062  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:58.575986  103439 type.go:168] "Request Body" body=""
	I1002 20:52:58.576070  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:58.576405  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:58.576463  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:59.076072  103439 type.go:168] "Request Body" body=""
	I1002 20:52:59.076176  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:59.076539  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:59.576174  103439 type.go:168] "Request Body" body=""
	I1002 20:52:59.576247  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:59.576606  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:00.075451  103439 type.go:168] "Request Body" body=""
	I1002 20:53:00.075535  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:00.075944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:00.575527  103439 type.go:168] "Request Body" body=""
	I1002 20:53:00.575613  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:00.576021  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:01.075639  103439 type.go:168] "Request Body" body=""
	I1002 20:53:01.075720  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:01.076158  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:01.076236  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:01.575757  103439 type.go:168] "Request Body" body=""
	I1002 20:53:01.575840  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:01.576224  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:02.075855  103439 type.go:168] "Request Body" body=""
	I1002 20:53:02.075943  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:02.076346  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:02.576050  103439 type.go:168] "Request Body" body=""
	I1002 20:53:02.576149  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:02.576502  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:03.076160  103439 type.go:168] "Request Body" body=""
	I1002 20:53:03.076234  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:03.076597  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:03.076676  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:03.575963  103439 type.go:168] "Request Body" body=""
	I1002 20:53:03.576036  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:03.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.076077  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.076167  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.076509  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.576256  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.576341  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.576710  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.075500  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.076015  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:05.576126  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:06.075659  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.075778  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:06.575713  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.575808  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.576161  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.075791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.075896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.076278  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.575857  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.576289  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:07.576361  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:08.075859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.075955  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.076329  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:08.576047  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.576136  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.576492  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.076215  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.076582  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.576306  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.576382  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.576707  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:09.576802  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:10.075438  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.075516  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.075948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:10.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.575609  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.575983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.075661  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.075769  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.076130  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.575757  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.575830  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.576189  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:12.075811  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.076252  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:12.076323  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[… identical polling iterations condensed: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 (identical Accept and User-Agent headers) repeated every ~500ms from 20:53:12.575 through 20:54:13.076, each returning an empty response in 0ms; the W-level node_ready.go:55 warning "error getting node \"functional-012915\" condition \"Ready\" status (will retry): … dial tcp 192.168.49.2:8441: connect: connection refused" recurred roughly every 2 seconds throughout …]
	W1002 20:54:13.076688  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:13.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:54:13.576339  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:13.576685  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.076408  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.076857  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.575582  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.575948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.075808  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.076275  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.575894  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.575975  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.576435  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:15.576516  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:16.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.076226  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.076603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:16.576326  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.576788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.075351  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.075430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.075787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.575559  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.575961  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:18.075538  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.075997  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:18.076063  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:18.575954  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.576031  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.576391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.076057  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.076145  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.076521  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.576266  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.576354  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.576728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.075522  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.075613  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.075992  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.576111  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:20.576172  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:21.075690  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.075834  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.076211  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:21.575853  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.575938  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.076012  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.076106  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.076455  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.576180  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.576267  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.576639  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:22.576703  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:23.076280  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.076362  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.076729  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:23.575332  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.575409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.575788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.075455  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.075827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.575436  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.575524  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.575897  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:25.075680  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.075782  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.076141  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:25.076204  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:25.575730  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.575836  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.075905  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.076277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.576092  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.576245  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:27.076357  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.076442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.076807  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:27.076864  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:27.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.575541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.075717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:28.076117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.576130  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.576214  103439 node_ready.go:38] duration metric: took 6m0.001003861s for node "functional-012915" to be "Ready" ...
	I1002 20:54:28.579396  103439 out.go:203] 
	W1002 20:54:28.581273  103439 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:54:28.581294  103439 out.go:285] * 
	W1002 20:54:28.583020  103439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:54:28.584974  103439 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.885114017Z" level=info msg="createCtr: deleting container 16564a8f8036bc7c90ccf24d061c487f09a6b071956df918122e4f456fc0e7c6 from storage" id=e74c4936-85a6-40d8-b6dd-479d3713227a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.888920116Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-012915_kube-system_7e750209f40bc1241cc38d19476e612c_0" id=dfc199da-232e-450b-83c4-4863712b12ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.889319932Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=e74c4936-85a6-40d8-b6dd-479d3713227a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.855296992Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94437663-21a7-4f9b-8633-2d64066323f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.856192975Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=462c0d6e-8b0b-4a2b-9c40-b6510da69b60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.856992012Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.857191241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.860550955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.861148899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.876921598Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878372533Z" level=info msg="createCtr: deleting container ID 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56 from idIndex" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878412292Z" level=info msg="createCtr: removing container 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878448047Z" level=info msg="createCtr: deleting container 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56 from storage" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.880726986Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.855493992Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=481ac10c-de12-458c-abb1-8096200aa5b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.856500268Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f651de02-7be7-42fd-87f9-0472131057d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.857511372Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.857835372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.862332144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.862929501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.878752653Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.880391279Z" level=info msg="createCtr: deleting container ID d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842 from idIndex" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.880428353Z" level=info msg="createCtr: removing container d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.8804592Z" level=info msg="createCtr: deleting container d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842 from storage" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.882638615Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_71bc375daf4e76699563858eee44bc44_0" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:54:30.287993    4337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:30.288543    4337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:30.290201    4337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:30.290614    4337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:30.292391    4337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:54:30 up  2:36,  0 user,  load average: 0.02, 0.03, 0.32
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:54:19 functional-012915 kubelet[1773]: E1002 20:54:19.890816    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 20:54:20 functional-012915 kubelet[1773]: E1002 20:54:20.317643    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 20:54:22 functional-012915 kubelet[1773]: E1002 20:54:22.896755    1773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 20:54:23 functional-012915 kubelet[1773]: E1002 20:54:23.537408    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:23 functional-012915 kubelet[1773]: I1002 20:54:23.741320    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:23 functional-012915 kubelet[1773]: E1002 20:54:23.741779    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.854851    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881041    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:27 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:27 functional-012915 kubelet[1773]:  > podSandboxID="585b4230bcb56046e825d4238227e61b36dc2e8921ea6147c622b6bed61a91bf"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881140    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:27 functional-012915 kubelet[1773]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:27 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881170    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.324986    1773 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.855017    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.882979    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:29 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:29 functional-012915 kubelet[1773]:  > podSandboxID="c697c06eaaf20ef2888311ed130f6d0dab82776628f2d6e3d184e9abb1e08331"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.883098    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:29 functional-012915 kubelet[1773]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(71bc375daf4e76699563858eee44bc44): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:29 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.883130    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="71bc375daf4e76699563858eee44bc44"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.010432    1773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.319199    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	

                                                
                                                
-- /stdout --
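
Every CreateContainer attempt in the CRI-O and kubelet logs above fails with the same root error, "cannot open sd-bus: No such file or directory": the runtime is trying to reach systemd over sd-bus (as it would with a systemd cgroup manager configured) inside a guest where no such socket is reachable, so etcd, kube-apiserver, kube-scheduler and kube-controller-manager never start and the apiserver stays connection-refused. A minimal diagnostic sketch in Go, assuming the conventional socket paths and that it runs inside the functional-012915 container; this is illustration, not part of the test suite:

// sdbus_probe.go: hedged diagnostic sketch for the "cannot open sd-bus"
// failures. It only checks whether the sockets an sd-bus client would dial
// exist and accept a connection; paths are assumed common systemd/D-Bus
// defaults, not confirmed from this run.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	paths := []string{
		"/run/systemd/private",        // systemd private bus (systemd cgroup manager)
		"/run/dbus/system_bus_socket", // system D-Bus
	}
	for _, p := range paths {
		conn, err := net.DialTimeout("unix", p, time.Second)
		if err != nil {
			fmt.Printf("%-30s unreachable: %v\n", p, err)
			continue
		}
		conn.Close()
		fmt.Printf("%-30s reachable\n", p)
	}
}

If neither socket is reachable, the usual remedies are running systemd in the guest or switching the runtime to the cgroupfs cgroup manager; whether either applies here is not established by these logs.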
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (310.969057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (366.38s)
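
The failure above is the tail of a 6m0s wait loop: minikube GETs /api/v1/nodes/functional-012915 roughly every 500ms and retries each "connection refused" until the WaitNodeCondition deadline expires. A bare-bones sketch of that poll-until-deadline pattern using only the Go standard library (URL, interval and budget are taken from the log; the TLS verification skip is an illustration-only shortcut, since the real client authenticates with the cluster CA):

// pollready.go: hedged sketch of the poll-until-deadline pattern from the
// node_ready loop: GET the node URL every 500ms until the apiserver answers
// or the 6-minute budget is spent.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-012915"
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration-only shortcut; the real client trusts the cluster CA
		// and presents client certificates instead of skipping verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver answered:", resp.Status)
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("deadline exceeded: apiserver never became reachable")
}

Any HTTP status at all would mean the apiserver socket is up; in this run the dial itself never succeeds, so the loop runs out its full budget.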

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-012915 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-012915 get po -A: exit status 1 (57.246893ms)

                                                
                                                
** stderr ** 
	E1002 20:54:31.237326  107085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:31.237675  107085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:31.239124  107085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:31.239476  107085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:31.240911  107085 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-012915 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1002 20:54:31.237326  107085 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:54:31.237675  107085 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:54:31.239124  107085 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:54:31.239476  107085 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nE1002 20:54:31.240911  107085 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://192.168.49.2:8441/api?timeout=32s\\\": dial tcp 192.168.49.2:8441: connect: connection refused\"\nThe connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-012915 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-012915 get po -A"
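
This is the same connection-refused endpoint (192.168.49.2:8441) seen during SoftStart, so the kubectl context resolves correctly and the failure is entirely server-side. A hedged client-go sketch of the equivalent reachability check; the kubeconfig path is an assumed default and the context name comes from the command above:

// ctxcheck.go: hedged sketch: load the "functional-012915" context and ask
// the discovery endpoint for the server version; a "connection refused"
// error here reproduces the kubectl failure without parsing stderr.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config") // assumed default path
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfig},
		&clientcmd.ConfigOverrides{CurrentContext: "functional-012915"},
	).ClientConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "build client:", err)
		os.Exit(1)
	}
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		fmt.Fprintln(os.Stderr, "apiserver unreachable:", err) // "connection refused" lands here
		os.Exit(1)
	}
	fmt.Println("apiserver up, version", v.GitVersion)
}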
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
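
The inspect output above shows each published guest port bound to 127.0.0.1 on an ephemeral host port, with the apiserver's 8441/tcp mapped to 32781; that mapping is how the host reaches the cluster endpoint. A short sketch of pulling the mapping programmatically by decoding only the needed fields of "docker inspect" (the struct shape is trimmed from the JSON above):

// portmap.go: hedged sketch: read NetworkSettings.Ports from `docker
// inspect` and print where 8441/tcp (the apiserver) is published on the host.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-012915").Output()
	if err != nil {
		log.Fatal(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}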
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (297.004221ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
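
The --format={{.Host}} and --format={{.APIServer}} flags are Go text/template expressions evaluated against minikube's status value, which is why the same profile can report "Running" for the host container but "Stopped" for the apiserver. A minimal sketch of the mechanism; the Status struct here is an illustration carrying the two field names used above, not minikube's actual type:

// statusfmt.go: hedged sketch of how a --format={{.APIServer}} style flag
// renders: parse the user-supplied template and execute it against a status value.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Stopped"} // values from the report above
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
}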
helpers_test.go:252: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-072312                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-072312   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start   │ --download-only -p download-docker-272222 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p download-docker-272222                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-272222 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start   │ --download-only -p binary-mirror-809560 --alsologtostderr --binary-mirror http://127.0.0.1:39541 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-809560   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p binary-mirror-809560                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-809560   │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ addons  │ disable dashboard -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ addons  │ enable dashboard -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ start   │ -p addons-436069 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ -p addons-436069                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-436069          │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │ 02 Oct 25 20:31 UTC │
	│ start   │ -p nospam-461767 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-461767 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                  │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:31 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ start   │ nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run                                                                                                                                                                                                                                                                                                                                                                                                               │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │                     │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                                                                                                                                                                                                                                                                                                                                                                                       │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                                                                                                                                                                                                                                                                                                                                                                                          │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ nospam-461767          │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                            │ functional-012915      │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	│ start   │ -p functional-012915 --alsologtostderr -v=8                                                                                                                                                                                                                                                                                                                                                                                                                              │ functional-012915      │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
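
The last two audit rows are the start under test and the restart whose log follows below. A minimal sketch for reproducing this post-mortem locally, assuming the functional-012915 profile still exists and the binary is built at out/minikube-linux-amd64:

	# Sketch: re-run the restart that produced this log, then collect the
	# same 25-line post-mortem the test harness gathers above.
	out/minikube-linux-amd64 start -p functional-012915 --alsologtostderr -v=8
	out/minikube-linux-amd64 -p functional-012915 logs -n 25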
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:48:24
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:48:24.799042  103439 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:24.799301  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799310  103439 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:24.799319  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799517  103439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:48:24.799997  103439 out.go:368] Setting JSON to false
	I1002 20:48:24.800864  103439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9046,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:48:24.800953  103439 start.go:140] virtualization: kvm guest
	I1002 20:48:24.803402  103439 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:48:24.804691  103439 notify.go:220] Checking for updates...
	I1002 20:48:24.804714  103439 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:48:24.806239  103439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:48:24.807535  103439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:24.808966  103439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:48:24.810229  103439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:48:24.811490  103439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:48:24.813239  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:24.813364  103439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:48:24.837336  103439 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:48:24.837438  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.897484  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.886469072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.897616  103439 docker.go:318] overlay module found
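
The driver checks above shell out to `docker system info --format "{{json .}}"` and parse the JSON in Go. A sketch of pulling out the same fields by hand; jq is only an illustration here, not something minikube uses:

	# Sketch: read the fields minikube cares about from `docker system info`.
	docker system info --format '{{json .}}' \
	  | jq '{Driver, CgroupDriver, ServerVersion, NCPU, MemTotal}'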
	I1002 20:48:24.900384  103439 out.go:179] * Using the docker driver based on existing profile
	I1002 20:48:24.901640  103439 start.go:304] selected driver: docker
	I1002 20:48:24.901656  103439 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.901817  103439 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:48:24.901921  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.957281  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.94713494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.957915  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:24.957982  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:24.958030  103439 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.959902  103439 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:48:24.961424  103439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:48:24.962912  103439 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:48:24.964111  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:24.964148  103439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:48:24.964157  103439 cache.go:58] Caching tarball of preloaded images
	I1002 20:48:24.964205  103439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:48:24.964264  103439 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:48:24.964275  103439 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:48:24.964363  103439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:48:24.984848  103439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:48:24.984867  103439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:48:24.984883  103439 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:48:24.984905  103439 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:48:24.984974  103439 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-012915"
	I1002 20:48:24.984991  103439 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:48:24.984998  103439 fix.go:54] fixHost starting: 
	I1002 20:48:24.985199  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:25.001871  103439 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:48:25.001898  103439 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:48:25.003929  103439 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:48:25.003964  103439 machine.go:93] provisionDockerMachine start ...
	I1002 20:48:25.004037  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.020996  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.021230  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.021243  103439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:48:25.163676  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
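
Each SSH step above first resolves the host port mapped to the node container's 22/tcp with a docker Go template. The same lookup stands alone as:

	# Sketch: find the host port that maps to the node container's sshd,
	# mirroring the template minikube runs above (here it resolves to 32778).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  functional-012915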
	
	I1002 20:48:25.163710  103439 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:48:25.163781  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.181773  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.181995  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.182012  103439 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:48:25.333959  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.334023  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.352331  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.352586  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.352605  103439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:48:25.495627  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
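
The heredoc the provisioner just ran is an idempotent /etc/hosts update. The same logic, annotated (behavior unchanged from the command above):

	# Sketch: minikube's /etc/hosts update from above, with comments.
	if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then  # hostname not mapped yet
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then          # reuse an existing 127.0.1.1 line
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts
	  else                                                  # otherwise append a new mapping
	    echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts
	  fi
	fi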
	I1002 20:48:25.495660  103439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:48:25.495680  103439 ubuntu.go:190] setting up certificates
	I1002 20:48:25.495691  103439 provision.go:84] configureAuth start
	I1002 20:48:25.495761  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:25.513229  103439 provision.go:143] copyHostCerts
	I1002 20:48:25.513269  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513297  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:48:25.513309  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513378  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:48:25.513471  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513489  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:48:25.513496  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513524  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:48:25.513585  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513606  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:48:25.513612  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513642  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:48:25.513706  103439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:48:25.699700  103439 provision.go:177] copyRemoteCerts
	I1002 20:48:25.699774  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:48:25.699818  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.717132  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:25.819529  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:48:25.819590  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:48:25.836961  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:48:25.837026  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:48:25.853991  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:48:25.854053  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:48:25.872348  103439 provision.go:87] duration metric: took 376.642239ms to configureAuth
	I1002 20:48:25.872378  103439 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:48:25.872536  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.872653  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.891454  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.891685  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.891706  103439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:48:26.156804  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:48:26.156829  103439 machine.go:96] duration metric: took 1.152858016s to provisionDockerMachine
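
The drop-in written above marks the service CIDR as an insecure registry for CRI-O. A quick way to confirm it from the host, assuming `minikube ssh` works against this profile:

	# Sketch: verify the drop-in and option written above, from the host.
	out/minikube-linux-amd64 -p functional-012915 ssh -- cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '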
	I1002 20:48:26.156858  103439 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:48:26.156868  103439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:48:26.156920  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:48:26.156969  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.176188  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.278892  103439 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:48:26.282350  103439 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:48:26.282380  103439 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:48:26.282385  103439 command_runner.go:130] > VERSION_ID="12"
	I1002 20:48:26.282389  103439 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:48:26.282393  103439 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:48:26.282397  103439 command_runner.go:130] > ID=debian
	I1002 20:48:26.282401  103439 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:48:26.282406  103439 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:48:26.282410  103439 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:48:26.282454  103439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:48:26.282471  103439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:48:26.282480  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:48:26.282532  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:48:26.282613  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:48:26.282622  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 20:48:26.282689  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:48:26.282696  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> /etc/test/nested/copy/84100/hosts
	I1002 20:48:26.282728  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:48:26.291027  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:26.308674  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:48:26.325806  103439 start.go:296] duration metric: took 168.930408ms for postStartSetup
	I1002 20:48:26.325916  103439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:48:26.325957  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.343664  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.443702  103439 command_runner.go:130] > 54%
	I1002 20:48:26.443812  103439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:48:26.449039  103439 command_runner.go:130] > 135G
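
The two probes above read the used percentage and free space on /var inside the node. Combined into one sketch:

	# Sketch: the same /var disk checks minikube just ran over SSH.
	df -h  /var | awk 'NR==2{print "used:", $5}'   # -> 54%
	df -BG /var | awk 'NR==2{print "free:", $4}'   # -> 135G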
	I1002 20:48:26.449077  103439 fix.go:56] duration metric: took 1.464076482s for fixHost
	I1002 20:48:26.449092  103439 start.go:83] releasing machines lock for "functional-012915", held for 1.464107586s
	I1002 20:48:26.449173  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:26.467196  103439 ssh_runner.go:195] Run: cat /version.json
	I1002 20:48:26.467258  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.467342  103439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:48:26.467420  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.485438  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.485701  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.633417  103439 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:48:26.635353  103439 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:48:26.635549  103439 ssh_runner.go:195] Run: systemctl --version
	I1002 20:48:26.642439  103439 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:48:26.642484  103439 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:48:26.642544  103439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:48:26.678549  103439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:48:26.683206  103439 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:48:26.683277  103439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:48:26.683333  103439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:48:26.691349  103439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
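
For the docker driver with crio and kindnet, minikube sidelines any competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; the find above matched nothing. The same command, quoted for an interactive shell (the mv is rewritten through "$1" here, an equivalent of the {} substitution above):

	# Sketch: rename competing bridge/podman CNI configs out of the way.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;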
	I1002 20:48:26.691374  103439 start.go:495] detecting cgroup driver to use...
	I1002 20:48:26.691404  103439 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:48:26.691448  103439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:48:26.705612  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:48:26.718317  103439 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:48:26.718372  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:48:26.732790  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:48:26.745127  103439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:48:26.830208  103439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:48:26.916089  103439 docker.go:234] disabling docker service ...
	I1002 20:48:26.916158  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:48:26.931041  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:48:26.944314  103439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:48:27.029050  103439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:48:27.113127  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:48:27.125650  103439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:48:27.138813  103439 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:48:27.139624  103439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:48:27.139683  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.148622  103439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:48:27.148678  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.157772  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.166537  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.175276  103439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:48:27.183311  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.192091  103439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.200250  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.208827  103439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:48:27.216057  103439 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:48:27.216134  103439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:48:27.223341  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.309631  103439 ssh_runner.go:195] Run: sudo systemctl restart crio
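
The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A sketch for checking the result on the node:

	# Sketch: confirm the settings the sed edits above should have produced.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected per the commands above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",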
	I1002 20:48:27.427286  103439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:48:27.427366  103439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:48:27.431839  103439 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:48:27.431866  103439 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:48:27.431885  103439 command_runner.go:130] > Device: 0,59	Inode: 3822        Links: 1
	I1002 20:48:27.431892  103439 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:27.431897  103439 command_runner.go:130] > Access: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431903  103439 command_runner.go:130] > Modify: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431907  103439 command_runner.go:130] > Change: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431912  103439 command_runner.go:130] >  Birth: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431962  103439 start.go:563] Will wait 60s for crictl version
	I1002 20:48:27.432014  103439 ssh_runner.go:195] Run: which crictl
	I1002 20:48:27.435939  103439 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:48:27.436036  103439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:48:27.458416  103439 command_runner.go:130] > Version:  0.1.0
	I1002 20:48:27.458438  103439 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:48:27.458443  103439 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:48:27.458448  103439 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:48:27.460155  103439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
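
crictl reads its endpoint from the /etc/crictl.yaml written a few steps earlier; the same version query also works with the endpoint passed explicitly:

	# Sketch: query CRI-O directly over the socket configured above.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	# -> RuntimeName: cri-o, RuntimeVersion: 1.34.1, RuntimeApiVersion: v1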
	I1002 20:48:27.460222  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.486159  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.486183  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.486190  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.486198  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.486205  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.486212  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.486219  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.486226  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.486237  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.486246  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.486251  103439 command_runner.go:130] >      static
	I1002 20:48:27.486259  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.486263  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.486266  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.486272  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.486276  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.486300  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.486312  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.486330  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.486339  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.487532  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.514593  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.514624  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.514630  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.514634  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.514639  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.514643  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.514647  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.514654  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.514658  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.514662  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.514665  103439 command_runner.go:130] >      static
	I1002 20:48:27.514668  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.514677  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.514685  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.514688  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.514691  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.514695  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.514699  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.514706  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.514709  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.516768  103439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:48:27.518063  103439 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:48:27.535001  103439 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:48:27.539645  103439 command_runner.go:130] > 192.168.49.1	host.minikube.internal
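
The grep above confirms the node resolves host.minikube.internal to the docker network gateway (192.168.49.1), which is the route from the node back to the host. To check it by hand:

	# Sketch: confirm the host gateway entry minikube just verified.
	out/minikube-linux-amd64 -p functional-012915 ssh -- grep host.minikube.internal /etc/hosts
	# -> 192.168.49.1	host.minikube.internal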
	I1002 20:48:27.539759  103439 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
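	The "{Name:... Memory:4096 ...}" notation in the kubeadm.go:883 line above is Go's %+v struct formatting. A tiny illustration, using a hypothetical trimmed-down struct rather than minikube's real ClusterConfig:

	package main

	import "fmt"

	// Hypothetical struct with a few of the fields visible in the log line;
	// the real minikube config type has many more.
	type clusterConfig struct {
		Name   string
		Memory int
		CPUs   int
	}

	func main() {
		cc := clusterConfig{Name: "functional-012915", Memory: 4096, CPUs: 2}
		// %+v prints "{Field:value Field:value}", which is exactly the
		// notation seen in the kubeadm.go:883 line above.
		fmt.Printf("updating cluster %+v ...\n", cc)
	}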
	I1002 20:48:27.539875  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:27.539928  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.571471  103439 command_runner.go:130] > {
	I1002 20:48:27.571489  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.571493  103439 command_runner.go:130] >     {
	I1002 20:48:27.571502  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.571507  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571513  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.571516  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571520  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571528  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.571535  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.571539  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571543  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.571547  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571554  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571560  103439 command_runner.go:130] >     },
	I1002 20:48:27.571568  103439 command_runner.go:130] >     {
	I1002 20:48:27.571574  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.571577  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571583  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.571588  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571592  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571600  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.571610  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.571616  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571620  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.571626  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571633  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571644  103439 command_runner.go:130] >     },
	I1002 20:48:27.571650  103439 command_runner.go:130] >     {
	I1002 20:48:27.571656  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.571662  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571667  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.571672  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571676  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571685  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.571694  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.571700  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571704  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.571710  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.571714  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571719  103439 command_runner.go:130] >     },
	I1002 20:48:27.571721  103439 command_runner.go:130] >     {
	I1002 20:48:27.571727  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.571733  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571752  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.571758  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571767  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571778  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.571787  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.571792  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571796  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.571802  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571805  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571810  103439 command_runner.go:130] >       },
	I1002 20:48:27.571824  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571831  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571834  103439 command_runner.go:130] >     },
	I1002 20:48:27.571838  103439 command_runner.go:130] >     {
	I1002 20:48:27.571844  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.571850  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571859  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.571866  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571870  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571879  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.571888  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.571894  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571898  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.571903  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571907  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571913  103439 command_runner.go:130] >       },
	I1002 20:48:27.571916  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571922  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571925  103439 command_runner.go:130] >     },
	I1002 20:48:27.571931  103439 command_runner.go:130] >     {
	I1002 20:48:27.571937  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.571943  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571948  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.571953  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571957  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571967  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.571976  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.571981  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571985  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.571991  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571994  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572000  103439 command_runner.go:130] >       },
	I1002 20:48:27.572003  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572009  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572012  103439 command_runner.go:130] >     },
	I1002 20:48:27.572015  103439 command_runner.go:130] >     {
	I1002 20:48:27.572023  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.572027  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572038  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.572048  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572054  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572061  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.572070  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.572076  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572080  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.572085  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572089  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572095  103439 command_runner.go:130] >     },
	I1002 20:48:27.572098  103439 command_runner.go:130] >     {
	I1002 20:48:27.572106  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.572109  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572114  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.572119  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572123  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572132  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.572157  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.572163  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572167  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.572172  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572175  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572180  103439 command_runner.go:130] >       },
	I1002 20:48:27.572184  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572189  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572192  103439 command_runner.go:130] >     },
	I1002 20:48:27.572197  103439 command_runner.go:130] >     {
	I1002 20:48:27.572203  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.572206  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572213  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.572217  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572222  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572229  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.572237  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.572248  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572254  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.572258  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572263  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.572267  103439 command_runner.go:130] >       },
	I1002 20:48:27.572273  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572282  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.572288  103439 command_runner.go:130] >     }
	I1002 20:48:27.572291  103439 command_runner.go:130] >   ]
	I1002 20:48:27.572295  103439 command_runner.go:130] > }
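	Before deciding that "all images are preloaded", minikube compares this `crictl images --output json` payload against the expected image set. A minimal sketch (not minikube's actual code) of decoding the payload above in Go; the struct field names match the JSON shown, and note that "size" is a string in crictl's output, not a number:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	func main() {
		// Same command the log shows minikube running inside the node.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, img.Size, "pinned:", img.Pinned)
		}
	}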
	I1002 20:48:27.573606  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.573628  103439 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:48:27.573687  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.599395  103439 command_runner.go:130] > {
	I1002 20:48:27.599418  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.599424  103439 command_runner.go:130] >     {
	I1002 20:48:27.599434  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.599439  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599447  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.599452  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599460  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599473  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.599500  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.599510  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599518  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.599526  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599540  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599549  103439 command_runner.go:130] >     },
	I1002 20:48:27.599555  103439 command_runner.go:130] >     {
	I1002 20:48:27.599575  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.599582  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599590  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.599596  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599604  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599624  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.599640  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.599648  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599656  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.599664  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599676  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599684  103439 command_runner.go:130] >     },
	I1002 20:48:27.599690  103439 command_runner.go:130] >     {
	I1002 20:48:27.599703  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.599713  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599722  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.599730  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599754  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599770  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.599783  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.599791  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599798  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.599808  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.599815  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599823  103439 command_runner.go:130] >     },
	I1002 20:48:27.599829  103439 command_runner.go:130] >     {
	I1002 20:48:27.599840  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.599849  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599858  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.599865  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599873  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599887  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.599901  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.599918  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599927  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.599934  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.599942  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.599948  103439 command_runner.go:130] >       },
	I1002 20:48:27.599974  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599984  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599989  103439 command_runner.go:130] >     },
	I1002 20:48:27.599994  103439 command_runner.go:130] >     {
	I1002 20:48:27.600004  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.600013  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600021  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.600029  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600036  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600050  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.600065  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.600073  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600080  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.600089  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600103  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600112  103439 command_runner.go:130] >       },
	I1002 20:48:27.600119  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600128  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600134  103439 command_runner.go:130] >     },
	I1002 20:48:27.600142  103439 command_runner.go:130] >     {
	I1002 20:48:27.600152  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.600161  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600171  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.600179  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600185  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600199  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.600213  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.600220  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600233  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.600242  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600250  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600258  103439 command_runner.go:130] >       },
	I1002 20:48:27.600264  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600273  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600278  103439 command_runner.go:130] >     },
	I1002 20:48:27.600284  103439 command_runner.go:130] >     {
	I1002 20:48:27.600297  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.600306  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600315  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.600332  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600339  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600354  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.600368  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.600376  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600383  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.600393  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600401  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600410  103439 command_runner.go:130] >     },
	I1002 20:48:27.600415  103439 command_runner.go:130] >     {
	I1002 20:48:27.600423  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.600428  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600437  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.600446  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600452  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600464  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.600497  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.600505  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600513  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.600520  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600527  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600536  103439 command_runner.go:130] >       },
	I1002 20:48:27.600554  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600563  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600569  103439 command_runner.go:130] >     },
	I1002 20:48:27.600574  103439 command_runner.go:130] >     {
	I1002 20:48:27.600585  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.600594  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600603  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.600611  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600618  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600631  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.600643  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.600652  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600659  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.600668  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600676  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.600684  103439 command_runner.go:130] >       },
	I1002 20:48:27.600692  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600701  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.600708  103439 command_runner.go:130] >     }
	I1002 20:48:27.600716  103439 command_runner.go:130] >   ]
	I1002 20:48:27.600721  103439 command_runner.go:130] > }
	I1002 20:48:27.600844  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.600859  103439 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:48:27.600868  103439 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:48:27.600982  103439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
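	The empty ExecStart= line in the unit above is deliberate: in a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service before the new command line is set; without it, systemd would reject a second ExecStart for a simple service. A minimal sketch of rendering such an override in Go (the flag set is trimmed for brevity and the output path is illustrative, not taken from this log):

	package main

	import (
		"fmt"
		"os"
	)

	// kubeletOverride renders a drop-in like the one in the log above. The
	// first, empty ExecStart= clears any inherited ExecStart before the new
	// command line is assigned.
	func kubeletOverride(hostname, nodeIP string) string {
		return fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --hostname-override=%s --node-ip=%s

	[Install]
	`, hostname, nodeIP)
	}

	func main() {
		// Illustrative path; a real deployment writes into a
		// kubelet.service.d/ drop-in directory and then reloads systemd.
		path := "/tmp/10-kubeadm.conf"
		unit := kubeletOverride("functional-012915", "192.168.49.2")
		if err := os.WriteFile(path, []byte(unit), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("wrote", path)
	}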
	I1002 20:48:27.601057  103439 ssh_runner.go:195] Run: crio config
	I1002 20:48:27.642390  103439 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:48:27.642423  103439 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:48:27.642435  103439 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:48:27.642439  103439 command_runner.go:130] > #
	I1002 20:48:27.642450  103439 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:48:27.642460  103439 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:48:27.642470  103439 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:48:27.642501  103439 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:48:27.642510  103439 command_runner.go:130] > # reload'.
	I1002 20:48:27.642520  103439 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:48:27.642532  103439 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:48:27.642543  103439 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:48:27.642558  103439 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:48:27.642563  103439 command_runner.go:130] > [crio]
	I1002 20:48:27.642572  103439 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:48:27.642580  103439 command_runner.go:130] > # containers images, in this directory.
	I1002 20:48:27.642602  103439 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:48:27.642618  103439 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:48:27.642627  103439 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:48:27.642637  103439 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 20:48:27.642643  103439 command_runner.go:130] > # imagestore = ""
	I1002 20:48:27.642656  103439 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:48:27.642670  103439 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:48:27.642681  103439 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:48:27.642691  103439 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:48:27.642708  103439 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:48:27.642715  103439 command_runner.go:130] > # storage_option = [
	I1002 20:48:27.642723  103439 command_runner.go:130] > # ]
	I1002 20:48:27.642733  103439 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:48:27.642762  103439 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:48:27.642770  103439 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:48:27.642783  103439 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:48:27.642796  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:48:27.642804  103439 command_runner.go:130] > # always happen on a node reboot
	I1002 20:48:27.642814  103439 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:48:27.642844  103439 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:48:27.642859  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:48:27.642869  103439 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:48:27.642883  103439 command_runner.go:130] > # version_file_persist = ""
	I1002 20:48:27.642895  103439 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:48:27.642919  103439 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:48:27.642930  103439 command_runner.go:130] > # internal_wipe = true
	I1002 20:48:27.642942  103439 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:48:27.642957  103439 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:48:27.642963  103439 command_runner.go:130] > # internal_repair = true
	I1002 20:48:27.642972  103439 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:48:27.642981  103439 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:48:27.642990  103439 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:48:27.642998  103439 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:48:27.643012  103439 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:48:27.643018  103439 command_runner.go:130] > [crio.api]
	I1002 20:48:27.643028  103439 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:48:27.643038  103439 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:48:27.643047  103439 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:48:27.643058  103439 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:48:27.643068  103439 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:48:27.643081  103439 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:48:27.643088  103439 command_runner.go:130] > # stream_port = "0"
	I1002 20:48:27.643100  103439 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:48:27.643107  103439 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:48:27.643117  103439 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:48:27.643126  103439 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:48:27.643137  103439 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:48:27.643149  103439 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643154  103439 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:48:27.643169  103439 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:48:27.643178  103439 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643188  103439 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:48:27.643205  103439 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:48:27.643218  103439 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:48:27.643228  103439 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:48:27.643241  103439 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:48:27.643279  103439 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643300  103439 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:48:27.643322  103439 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643333  103439 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:48:27.643343  103439 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:48:27.643352  103439 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:48:27.643370  103439 command_runner.go:130] > [crio.runtime]
	I1002 20:48:27.643381  103439 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:48:27.643393  103439 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:48:27.643403  103439 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:48:27.643414  103439 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:48:27.643423  103439 command_runner.go:130] > # default_ulimits = [
	I1002 20:48:27.643428  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643441  103439 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:48:27.643450  103439 command_runner.go:130] > # no_pivot = false
	I1002 20:48:27.643460  103439 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:48:27.643473  103439 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:48:27.643482  103439 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:48:27.643494  103439 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:48:27.643511  103439 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:48:27.643524  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643532  103439 command_runner.go:130] > # conmon = ""
	I1002 20:48:27.643539  103439 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:48:27.643549  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:48:27.643556  103439 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:48:27.643565  103439 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:48:27.643572  103439 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:48:27.643582  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643588  103439 command_runner.go:130] > # conmon_env = [
	I1002 20:48:27.643592  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643600  103439 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:48:27.643612  103439 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:48:27.643622  103439 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:48:27.643631  103439 command_runner.go:130] > # default_env = [
	I1002 20:48:27.643647  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643661  103439 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:48:27.643672  103439 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:48:27.643679  103439 command_runner.go:130] > # selinux = false
	I1002 20:48:27.643689  103439 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:48:27.643701  103439 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:48:27.643710  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643717  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.643729  103439 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:48:27.643755  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643766  103439 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:48:27.643777  103439 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:48:27.643790  103439 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:48:27.643804  103439 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:48:27.643815  103439 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:48:27.643826  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643834  103439 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:48:27.643847  103439 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:48:27.643856  103439 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:48:27.643863  103439 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:48:27.643875  103439 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:48:27.643886  103439 command_runner.go:130] > # blockio parameters.
	I1002 20:48:27.643892  103439 command_runner.go:130] > # blockio_reload = false
	I1002 20:48:27.643901  103439 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:48:27.643907  103439 command_runner.go:130] > # irqbalance daemon.
	I1002 20:48:27.643914  103439 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:48:27.643922  103439 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 20:48:27.643930  103439 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:48:27.643939  103439 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:48:27.643946  103439 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:48:27.643955  103439 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:48:27.643967  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643976  103439 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:48:27.643991  103439 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:48:27.643998  103439 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:48:27.644004  103439 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:48:27.644010  103439 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:48:27.644016  103439 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:48:27.644022  103439 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:48:27.644026  103439 command_runner.go:130] > # will be added.
	I1002 20:48:27.644030  103439 command_runner.go:130] > # default_capabilities = [
	I1002 20:48:27.644036  103439 command_runner.go:130] > # 	"CHOWN",
	I1002 20:48:27.644039  103439 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:48:27.644042  103439 command_runner.go:130] > # 	"FSETID",
	I1002 20:48:27.644046  103439 command_runner.go:130] > # 	"FOWNER",
	I1002 20:48:27.644049  103439 command_runner.go:130] > # 	"SETGID",
	I1002 20:48:27.644077  103439 command_runner.go:130] > # 	"SETUID",
	I1002 20:48:27.644089  103439 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:48:27.644096  103439 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:48:27.644099  103439 command_runner.go:130] > # 	"KILL",
	I1002 20:48:27.644102  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644111  103439 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:48:27.644117  103439 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:48:27.644124  103439 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:48:27.644129  103439 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:48:27.644137  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644140  103439 command_runner.go:130] > default_sysctls = [
	I1002 20:48:27.644146  103439 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:48:27.644149  103439 command_runner.go:130] > ]
	I1002 20:48:27.644153  103439 command_runner.go:130] > # List of devices on the host that a
	I1002 20:48:27.644159  103439 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:48:27.644165  103439 command_runner.go:130] > # allowed_devices = [
	I1002 20:48:27.644168  103439 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:48:27.644172  103439 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:48:27.644177  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644181  103439 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:48:27.644194  103439 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:48:27.644201  103439 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:48:27.644207  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644210  103439 command_runner.go:130] > # additional_devices = [
	I1002 20:48:27.644213  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644218  103439 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:48:27.644224  103439 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:48:27.644227  103439 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:48:27.644231  103439 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:48:27.644235  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644241  103439 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:48:27.644249  103439 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:48:27.644253  103439 command_runner.go:130] > # Defaults to false.
	I1002 20:48:27.644259  103439 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:48:27.644265  103439 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:48:27.644272  103439 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:48:27.644275  103439 command_runner.go:130] > # hooks_dir = [
	I1002 20:48:27.644280  103439 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:48:27.644283  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644289  103439 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:48:27.644297  103439 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:48:27.644302  103439 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:48:27.644305  103439 command_runner.go:130] > #
	I1002 20:48:27.644310  103439 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:48:27.644323  103439 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:48:27.644329  103439 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:48:27.644334  103439 command_runner.go:130] > #
	I1002 20:48:27.644340  103439 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:48:27.644346  103439 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:48:27.644352  103439 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:48:27.644356  103439 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:48:27.644359  103439 command_runner.go:130] > #
	I1002 20:48:27.644363  103439 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:48:27.644377  103439 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:48:27.644385  103439 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:48:27.644389  103439 command_runner.go:130] > # pids_limit = -1
	I1002 20:48:27.644397  103439 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 20:48:27.644403  103439 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:48:27.644409  103439 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:48:27.644418  103439 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:48:27.644422  103439 command_runner.go:130] > # log_size_max = -1
	I1002 20:48:27.644430  103439 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:48:27.644434  103439 command_runner.go:130] > # log_to_journald = false
	I1002 20:48:27.644439  103439 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:48:27.644444  103439 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:48:27.644450  103439 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:48:27.644454  103439 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:48:27.644461  103439 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:48:27.644465  103439 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:48:27.644470  103439 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:48:27.644473  103439 command_runner.go:130] > # read_only = false
	I1002 20:48:27.644482  103439 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:48:27.644490  103439 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:48:27.644494  103439 command_runner.go:130] > # live configuration reload.
	I1002 20:48:27.644500  103439 command_runner.go:130] > # log_level = "info"
	I1002 20:48:27.644505  103439 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:48:27.644509  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.644512  103439 command_runner.go:130] > # log_filter = ""
	I1002 20:48:27.644518  103439 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644525  103439 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:48:27.644529  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644536  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644542  103439 command_runner.go:130] > # uid_mappings = ""
	I1002 20:48:27.644547  103439 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644552  103439 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:48:27.644559  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644573  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644579  103439 command_runner.go:130] > # gid_mappings = ""
	I1002 20:48:27.644585  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:48:27.644591  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644598  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644606  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644611  103439 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:48:27.644617  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:48:27.644625  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644631  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644640  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644644  103439 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:48:27.644652  103439 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:48:27.644657  103439 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:48:27.644665  103439 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1002 20:48:27.644668  103439 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:48:27.644673  103439 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:48:27.644679  103439 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:48:27.644686  103439 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:48:27.644690  103439 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:48:27.644693  103439 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:48:27.644699  103439 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:48:27.644706  103439 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1002 20:48:27.644712  103439 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:48:27.644718  103439 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:48:27.644726  103439 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:48:27.644733  103439 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:48:27.644752  103439 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:48:27.644764  103439 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:48:27.644769  103439 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:48:27.644777  103439 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:48:27.644782  103439 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:48:27.644785  103439 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:48:27.644798  103439 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:48:27.644804  103439 command_runner.go:130] > # pinns_path = ""
	I1002 20:48:27.644810  103439 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:48:27.644817  103439 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:48:27.644821  103439 command_runner.go:130] > # enable_criu_support = true
	I1002 20:48:27.644826  103439 command_runner.go:130] > # Enable/disable the generation of the container,
	I1002 20:48:27.644831  103439 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:48:27.644837  103439 command_runner.go:130] > # enable_pod_events = false
	I1002 20:48:27.644842  103439 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:48:27.644849  103439 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:48:27.644853  103439 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:48:27.644858  103439 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:48:27.644867  103439 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1002 20:48:27.644876  103439 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:48:27.644882  103439 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:48:27.644890  103439 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:48:27.644896  103439 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:48:27.644900  103439 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:48:27.644905  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644911  103439 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:48:27.644919  103439 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:48:27.644925  103439 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:48:27.644930  103439 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:48:27.644932  103439 command_runner.go:130] > #
	I1002 20:48:27.644937  103439 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:48:27.644943  103439 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:48:27.644947  103439 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:48:27.644951  103439 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:48:27.644955  103439 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:48:27.644959  103439 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:48:27.644963  103439 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:48:27.644968  103439 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:48:27.644972  103439 command_runner.go:130] > # monitor_env = []
	I1002 20:48:27.644980  103439 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:48:27.644987  103439 command_runner.go:130] > # allowed_annotations = []
	I1002 20:48:27.644992  103439 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:48:27.644998  103439 command_runner.go:130] > # no_sync_log = false
	I1002 20:48:27.645001  103439 command_runner.go:130] > # default_annotations = {}
	I1002 20:48:27.645007  103439 command_runner.go:130] > # stream_websockets = false
	I1002 20:48:27.645011  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.645086  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645099  103439 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:48:27.645104  103439 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:48:27.645110  103439 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:48:27.645115  103439 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:48:27.645119  103439 command_runner.go:130] > #   in $PATH.
	I1002 20:48:27.645124  103439 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:48:27.645131  103439 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:48:27.645137  103439 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:48:27.645142  103439 command_runner.go:130] > #   state.
	I1002 20:48:27.645148  103439 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:48:27.645156  103439 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:48:27.645161  103439 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:48:27.645173  103439 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:48:27.645180  103439 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:48:27.645186  103439 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:48:27.645191  103439 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:48:27.645197  103439 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:48:27.645205  103439 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:48:27.645216  103439 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:48:27.645224  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:48:27.645231  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:48:27.645239  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:48:27.645245  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:48:27.645254  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:48:27.645259  103439 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:48:27.645276  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:48:27.645284  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:48:27.645296  103439 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:48:27.645301  103439 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:48:27.645309  103439 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:48:27.645320  103439 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:48:27.645327  103439 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:48:27.645333  103439 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:48:27.645341  103439 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:48:27.645348  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:48:27.645355  103439 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:48:27.645360  103439 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:48:27.645368  103439 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:48:27.645373  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:48:27.645381  103439 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:48:27.645385  103439 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:48:27.645392  103439 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:48:27.645398  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:48:27.645405  103439 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1002 20:48:27.645410  103439 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:48:27.645417  103439 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:48:27.645426  103439 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:48:27.645433  103439 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:48:27.645441  103439 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:48:27.645446  103439 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:48:27.645454  103439 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:48:27.645461  103439 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:48:27.645468  103439 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:48:27.645475  103439 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:48:27.645484  103439 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:48:27.645490  103439 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:48:27.645496  103439 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:48:27.645505  103439 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:48:27.645517  103439 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:48:27.645523  103439 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:48:27.645529  103439 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:48:27.645542  103439 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:48:27.645548  103439 command_runner.go:130] > #
	I1002 20:48:27.645552  103439 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:48:27.645555  103439 command_runner.go:130] > #
	I1002 20:48:27.645560  103439 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:48:27.645569  103439 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:48:27.645573  103439 command_runner.go:130] > #
	I1002 20:48:27.645578  103439 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:48:27.645586  103439 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:48:27.645589  103439 command_runner.go:130] > #
	I1002 20:48:27.645595  103439 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:48:27.645598  103439 command_runner.go:130] > # feature.
	I1002 20:48:27.645601  103439 command_runner.go:130] > #
	I1002 20:48:27.645606  103439 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 20:48:27.645615  103439 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:48:27.645622  103439 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:48:27.645627  103439 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:48:27.645635  103439 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:48:27.645637  103439 command_runner.go:130] > #
	I1002 20:48:27.645643  103439 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:48:27.645651  103439 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:48:27.645653  103439 command_runner.go:130] > #
	I1002 20:48:27.645662  103439 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:48:27.645672  103439 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:48:27.645676  103439 command_runner.go:130] > #
	I1002 20:48:27.645682  103439 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:48:27.645690  103439 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:48:27.645693  103439 command_runner.go:130] > # limitation.
	I1002 20:48:27.645697  103439 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:48:27.645701  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:48:27.645709  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645715  103439 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:48:27.645725  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645731  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645746  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645754  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645762  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645768  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645777  103439 command_runner.go:130] > allowed_annotations = [
	I1002 20:48:27.645783  103439 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:48:27.645788  103439 command_runner.go:130] > ]
	I1002 20:48:27.645792  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645796  103439 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:48:27.645803  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:48:27.645807  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645811  103439 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:48:27.645815  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645818  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645822  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645826  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645830  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645834  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645838  103439 command_runner.go:130] > privileged_without_host_devices = false
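Pulling together the runtimes-table template and the seccomp-notifier prerequisites documented above, a custom handler entry could look like the following sketch. The handler name "myhandler" and its paths are hypothetical, not taken from this run; every key used is one documented in the comments above.

	[crio.runtime.runtimes.myhandler]
	runtime_path = "/usr/local/bin/myhandler"   # hypothetical absolute path on the host
	runtime_type = "oci"                        # one of "oci" or "vm"; "oci" is the default
	runtime_root = "/run/myhandler"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_env = ["LOG_DRIVER=systemd"]        # conmon-rs logging target, per the notes above
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",   # required to use the seccomp notifier
	]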
	I1002 20:48:27.645844  103439 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:48:27.645852  103439 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:48:27.645857  103439 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:48:27.645866  103439 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:48:27.645875  103439 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:48:27.645886  103439 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:48:27.645894  103439 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:48:27.645899  103439 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:48:27.645907  103439 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:48:27.645917  103439 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:48:27.645930  103439 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:48:27.645940  103439 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:48:27.645943  103439 command_runner.go:130] > # Example:
	I1002 20:48:27.645949  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:48:27.645953  103439 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:48:27.645960  103439 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:48:27.645966  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:48:27.645972  103439 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:48:27.645975  103439 command_runner.go:130] > # cpushares = "5"
	I1002 20:48:27.645979  103439 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:48:27.645982  103439 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:48:27.645986  103439 command_runner.go:130] > # cpulimit = "35"
	I1002 20:48:27.645989  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645993  103439 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:48:27.646000  103439 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:48:27.646006  103439 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:48:27.646011  103439 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:48:27.646021  103439 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:48:27.646026  103439 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 20:48:27.646034  103439 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:48:27.646044  103439 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:48:27.646052  103439 command_runner.go:130] > # Default value is set to true
	I1002 20:48:27.646058  103439 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:48:27.646068  103439 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:48:27.646074  103439 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:48:27.646083  103439 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:48:27.646092  103439 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:48:27.646104  103439 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1002 20:48:27.646118  103439 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:48:27.646127  103439 command_runner.go:130] > # timezone = ""
	I1002 20:48:27.646136  103439 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:48:27.646144  103439 command_runner.go:130] > #
	I1002 20:48:27.646158  103439 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:48:27.646179  103439 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:48:27.646188  103439 command_runner.go:130] > [crio.image]
	I1002 20:48:27.646201  103439 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:48:27.646209  103439 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:48:27.646217  103439 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:48:27.646225  103439 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646229  103439 command_runner.go:130] > # global_auth_file = ""
	I1002 20:48:27.646236  103439 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:48:27.646241  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646248  103439 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.646254  103439 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:48:27.646260  103439 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646265  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646271  103439 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:48:27.646276  103439 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:48:27.646281  103439 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1002 20:48:27.646289  103439 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1002 20:48:27.646295  103439 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:48:27.646301  103439 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:48:27.646306  103439 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:48:27.646316  103439 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:48:27.646323  103439 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:48:27.646329  103439 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:48:27.646336  103439 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:48:27.646342  103439 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:48:27.646345  103439 command_runner.go:130] > # pinned_images = [
	I1002 20:48:27.646348  103439 command_runner.go:130] > # ]
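As an illustration of the three pattern styles just described, a populated pinned_images list might read as follows (image names other than the pause image are hypothetical):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",    # exact: must match the entire name
		"registry.example.com/infra/*",    # glob: a single trailing wildcard
		"*critical*",                      # keyword: wildcards on both ends
	]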
	I1002 20:48:27.646354  103439 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:48:27.646362  103439 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:48:27.646368  103439 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:48:27.646376  103439 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:48:27.646381  103439 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:48:27.646386  103439 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:48:27.646399  103439 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:48:27.646411  103439 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:48:27.646423  103439 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:48:27.646436  103439 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1002 20:48:27.646447  103439 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:48:27.646458  103439 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:48:27.646470  103439 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:48:27.646480  103439 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:48:27.646486  103439 command_runner.go:130] > # changing them here.
	I1002 20:48:27.646491  103439 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:48:27.646497  103439 command_runner.go:130] > # insecure_registries = [
	I1002 20:48:27.646500  103439 command_runner.go:130] > # ]
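Because insecure_registries is deprecated in favor of registries.conf, the equivalent per-registry setting would live there instead. A minimal sketch in the containers-registries.conf(5) v2 format (the registry host is hypothetical):

	# /etc/containers/registries.conf
	[[registry]]
	prefix = "registry.internal.example:5000"
	location = "registry.internal.example:5000"
	insecure = true   # skip TLS verification for this registry only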
	I1002 20:48:27.646507  103439 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:48:27.646516  103439 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:48:27.646522  103439 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:48:27.646527  103439 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:48:27.646531  103439 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:48:27.646538  103439 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:48:27.646544  103439 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:48:27.646551  103439 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:48:27.646557  103439 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:48:27.646571  103439 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1002 20:48:27.646579  103439 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:48:27.646583  103439 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:48:27.646590  103439 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:48:27.646596  103439 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:48:27.646605  103439 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:48:27.646611  103439 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:48:27.646615  103439 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:48:27.646620  103439 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1002 20:48:27.646628  103439 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:48:27.646632  103439 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:48:27.646638  103439 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:48:27.646649  103439 command_runner.go:130] > # CNI plugins.
	I1002 20:48:27.646655  103439 command_runner.go:130] > [crio.network]
	I1002 20:48:27.646660  103439 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:48:27.646667  103439 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 20:48:27.646671  103439 command_runner.go:130] > # cni_default_network = ""
	I1002 20:48:27.646678  103439 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:48:27.646682  103439 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:48:27.646690  103439 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:48:27.646693  103439 command_runner.go:130] > # plugin_dirs = [
	I1002 20:48:27.646696  103439 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:48:27.646699  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646703  103439 command_runner.go:130] > # List of included pod metrics.
	I1002 20:48:27.646709  103439 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:48:27.646711  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646716  103439 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1002 20:48:27.646722  103439 command_runner.go:130] > [crio.metrics]
	I1002 20:48:27.646726  103439 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:48:27.646732  103439 command_runner.go:130] > # enable_metrics = false
	I1002 20:48:27.646752  103439 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:48:27.646761  103439 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 20:48:27.646767  103439 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:48:27.646775  103439 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:48:27.646783  103439 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:48:27.646787  103439 command_runner.go:130] > # metrics_collectors = [
	I1002 20:48:27.646793  103439 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:48:27.646797  103439 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:48:27.646800  103439 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:48:27.646804  103439 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:48:27.646807  103439 command_runner.go:130] > # 	"operations_total",
	I1002 20:48:27.646811  103439 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:48:27.646815  103439 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:48:27.646818  103439 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:48:27.646822  103439 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:48:27.646831  103439 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:48:27.646835  103439 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:48:27.646839  103439 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:48:27.646842  103439 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:48:27.646846  103439 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:48:27.646850  103439 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:48:27.646853  103439 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:48:27.646857  103439 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:48:27.646860  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646868  103439 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:48:27.646874  103439 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:48:27.646880  103439 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:48:27.646886  103439 command_runner.go:130] > # metrics_port = 9090
	I1002 20:48:27.646891  103439 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:48:27.646901  103439 command_runner.go:130] > # metrics_socket = ""
	I1002 20:48:27.646909  103439 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:48:27.646914  103439 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:48:27.646922  103439 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:48:27.646928  103439 command_runner.go:130] > # certificate on any modification event.
	I1002 20:48:27.646932  103439 command_runner.go:130] > # metrics_cert = ""
	I1002 20:48:27.646939  103439 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:48:27.646943  103439 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:48:27.646949  103439 command_runner.go:130] > # metrics_key = ""
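Assembling the metrics keys documented above, an opt-in configuration with a TLS endpoint could look like this sketch (the certificate paths are hypothetical; per the comments above, CRI-O self-signs if they are absent):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_bytes_total",
	]
	metrics_cert = "/etc/crio/metrics/tls.crt"
	metrics_key = "/etc/crio/metrics/tls.key"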
	I1002 20:48:27.646954  103439 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:48:27.646960  103439 command_runner.go:130] > [crio.tracing]
	I1002 20:48:27.646966  103439 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:48:27.646971  103439 command_runner.go:130] > # enable_tracing = false
	I1002 20:48:27.646977  103439 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1002 20:48:27.646983  103439 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:48:27.646993  103439 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:48:27.646999  103439 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
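As a worked example of the sampling knob above: 1000000 samples per million means every span is exported. A sketch enabling tracing against the default local collector endpoint shown above:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"            # default shown above
	tracing_sampling_rate_per_million = 1000000    # sample every span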
	I1002 20:48:27.647003  103439 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:48:27.647009  103439 command_runner.go:130] > [crio.nri]
	I1002 20:48:27.647017  103439 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:48:27.647023  103439 command_runner.go:130] > # enable_nri = true
	I1002 20:48:27.647032  103439 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:48:27.647038  103439 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:48:27.647042  103439 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:48:27.647049  103439 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:48:27.647053  103439 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:48:27.647060  103439 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:48:27.647065  103439 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:48:27.647584  103439 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:48:27.647654  103439 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:48:27.647663  103439 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:48:27.647672  103439 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:48:27.647686  103439 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:48:27.647693  103439 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:48:27.647707  103439 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:48:27.647731  103439 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:48:27.647757  103439 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:48:27.647770  103439 command_runner.go:130] > # - OCI hook injection
	I1002 20:48:27.647779  103439 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:48:27.647792  103439 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:48:27.647798  103439 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:48:27.647805  103439 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:48:27.647819  103439 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:48:27.647828  103439 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:48:27.647837  103439 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:48:27.647841  103439 command_runner.go:130] > #
	I1002 20:48:27.647853  103439 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:48:27.647859  103439 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:48:27.647866  103439 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:48:27.647883  103439 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:48:27.647891  103439 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:48:27.647898  103439 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:48:27.647906  103439 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:48:27.647916  103439 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:48:27.647921  103439 command_runner.go:130] > # ]
	I1002 20:48:27.647929  103439 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
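Collecting the validator keys listed above into one sketch: a configuration that rejects OCI hook injection and requires a single plugin might look like this (the plugin name is hypothetical):

	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true
	nri_validator_required_plugins = [
		"my-resource-plugin",   # hypothetical NRI plugin name
	]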
	I1002 20:48:27.647939  103439 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:48:27.647949  103439 command_runner.go:130] > [crio.stats]
	I1002 20:48:27.647958  103439 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:48:27.647966  103439 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:48:27.647973  103439 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:48:27.647994  103439 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:48:27.648004  103439 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:48:27.648009  103439 command_runner.go:130] > # collection_period = 0
	I1002 20:48:27.648051  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627189517Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:48:27.648070  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627217069Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:48:27.648087  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627236914Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:48:27.648106  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627255188Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:48:27.648122  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.62731995Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.648141  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627489035Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:48:27.648161  103439 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:48:27.648318  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:27.648331  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:27.648354  103439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:48:27.648401  103439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:48:27.648942  103439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:48:27.649009  103439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:48:27.657181  103439 command_runner.go:130] > kubeadm
	I1002 20:48:27.657198  103439 command_runner.go:130] > kubectl
	I1002 20:48:27.657203  103439 command_runner.go:130] > kubelet
	I1002 20:48:27.657948  103439 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:48:27.658013  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:48:27.665603  103439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:48:27.678534  103439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:48:27.691111  103439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:48:27.703366  103439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:48:27.707046  103439 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:48:27.707133  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.791376  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:27.804011  103439 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:48:27.804040  103439 certs.go:195] generating shared ca certs ...
	I1002 20:48:27.804056  103439 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:27.804180  103439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:48:27.804232  103439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:48:27.804241  103439 certs.go:257] generating profile certs ...
	I1002 20:48:27.804334  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:48:27.804375  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:48:27.804412  103439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:48:27.804424  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:48:27.804435  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:48:27.804453  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:48:27.804469  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:48:27.804481  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:48:27.804494  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:48:27.804506  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:48:27.804518  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:48:27.804560  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:48:27.804591  103439 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:48:27.804601  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:48:27.804623  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:48:27.804645  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:48:27.804666  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:48:27.804704  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:27.804729  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 20:48:27.804763  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:27.804780  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 20:48:27.805294  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:48:27.822974  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:48:27.840455  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:48:27.858368  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:48:27.877146  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:48:27.895282  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:48:27.912487  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:48:27.929452  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:48:27.947144  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:48:27.964177  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:48:27.981785  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:48:27.999006  103439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:48:28.011646  103439 ssh_runner.go:195] Run: openssl version
	I1002 20:48:28.017389  103439 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:48:28.017621  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:48:28.025902  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029403  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029446  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029489  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.063085  103439 command_runner.go:130] > 3ec20f2e
	I1002 20:48:28.063182  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:48:28.071431  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:48:28.080075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083770  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083829  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083901  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.117894  103439 command_runner.go:130] > b5213941
	I1002 20:48:28.117982  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:48:28.126480  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:48:28.135075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138711  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138759  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138809  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.172582  103439 command_runner.go:130] > 51391683
	I1002 20:48:28.172931  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:48:28.180914  103439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184555  103439 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184579  103439 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:48:28.184588  103439 command_runner.go:130] > Device: 8,1	Inode: 811435      Links: 1
	I1002 20:48:28.184598  103439 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:28.184608  103439 command_runner.go:130] > Access: 2025-10-02 20:44:21.070069799 +0000
	I1002 20:48:28.184616  103439 command_runner.go:130] > Modify: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184623  103439 command_runner.go:130] > Change: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184628  103439 command_runner.go:130] >  Birth: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184684  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:48:28.218476  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.218920  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:48:28.253813  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.254026  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:48:28.288477  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.288852  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:48:28.322969  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.323293  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:48:28.357073  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.357354  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:48:28.390854  103439 command_runner.go:130] > Certificate will not expire
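	[editor's note] Each "-checkend 86400" call above asks whether a certificate expires within the next 24 hours; exit status 0 produces the "Certificate will not expire" lines. The same check can be done natively with Go's crypto/x509 instead of shelling out to openssl (a sketch, using one of the cert paths from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -checkend <seconds>` (exit 0 = still valid).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}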
	I1002 20:48:28.391133  103439 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:28.391217  103439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:48:28.391280  103439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:48:28.420217  103439 cri.go:89] found id: ""
	I1002 20:48:28.420280  103439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:48:28.427672  103439 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:48:28.427700  103439 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:48:28.427710  103439 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:48:28.428396  103439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:48:28.428413  103439 kubeadm.go:597] restartPrimaryControlPlane start ...
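	[editor's note] The "sudo ls" check above is how the restart path is chosen: only when the kubelet config, the kubeadm flags file, and the etcd data directory all survive on disk does the code attempt a cluster restart instead of a fresh kubeadm init. A hypothetical sketch of that decision (not minikube's actual kubeadm.go):

	package main

	import (
		"fmt"
		"os"
	)

	// shouldRestart reports whether all of the state files checked in the log
	// exist; if any is missing, a fresh init is needed instead of a restart.
	func shouldRestart() bool {
		for _, p := range []string{
			"/var/lib/kubelet/config.yaml",
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/minikube/etcd",
		} {
			if _, err := os.Stat(p); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if shouldRestart() {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		}
	}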
	I1002 20:48:28.428455  103439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:48:28.435936  103439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:48:28.436039  103439 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.436106  103439 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-012915" cluster setting kubeconfig missing "functional-012915" context setting]
	I1002 20:48:28.436458  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.437072  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.437245  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.437717  103439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:48:28.437732  103439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:48:28.437753  103439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:48:28.437760  103439 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:48:28.437765  103439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:48:28.437782  103439 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:48:28.438160  103439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:48:28.446094  103439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:48:28.446137  103439 kubeadm.go:601] duration metric: took 17.717766ms to restartPrimaryControlPlane
	I1002 20:48:28.446149  103439 kubeadm.go:402] duration metric: took 55.025148ms to StartCluster
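	[editor's note] The "sudo diff -u" between kubeadm.yaml and kubeadm.yaml.new above is the reconfiguration gate: diff exits 0 when the rendered config is unchanged, which produces the "does not require reconfiguration" line. Roughly (hypothetical helper, not the real kubeadm.go):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configChanged runs diff -u old new; diff exits 0 when the files match,
	// 1 when they differ, and >1 on error.
	func configChanged(oldPath, newPath string) (bool, error) {
		err := exec.Command("diff", "-u", oldPath, newPath).Run()
		if err == nil {
			return false, nil // no drift: skip control-plane reconfiguration
		}
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
			return true, nil
		}
		return false, err
	}

	func main() {
		changed, err := configChanged("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(changed, err)
	}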
	I1002 20:48:28.446168  103439 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.446285  103439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.447035  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.447291  103439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:48:28.447487  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:28.447429  103439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:48:28.447531  103439 addons.go:69] Setting storage-provisioner=true in profile "functional-012915"
	I1002 20:48:28.447538  103439 addons.go:69] Setting default-storageclass=true in profile "functional-012915"
	I1002 20:48:28.447553  103439 addons.go:238] Setting addon storage-provisioner=true in "functional-012915"
	I1002 20:48:28.447556  103439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-012915"
	I1002 20:48:28.447587  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.447847  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.447963  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.456904  103439 out.go:179] * Verifying Kubernetes components...
	I1002 20:48:28.458283  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:28.468928  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.469101  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.469369  103439 addons.go:238] Setting addon default-storageclass=true in "functional-012915"
	I1002 20:48:28.469428  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.469783  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.469862  103439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:48:28.471474  103439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.471499  103439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:48:28.471557  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.496201  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.497174  103439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.497196  103439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:48:28.497262  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.518487  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.562123  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:28.575162  103439 node_ready.go:35] waiting up to 6m0s for node "functional-012915" to be "Ready" ...
	I1002 20:48:28.575316  103439 type.go:168] "Request Body" body=""
	I1002 20:48:28.575388  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:28.575672  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
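	[editor's note] The "Request"/"Response" pairs that repeat from here on are the node-readiness poll: roughly every 500ms the client GETs the node object until the API server answers (empty status="" responses here mean the connection is being refused while the control plane restarts). A sketch of the loop under those assumptions (TLS and client-certificate setup, visible in the kapi.go client config above, is omitted; this is not minikube's node_ready.go):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForNode polls the node URL until the API server answers 200 or the
	// timeout elapses; callers pass a client configured with the cluster CA
	// and client certificates.
	func waitForNode(c *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := c.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // then inspect .status.conditions for Ready
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node not Ready within %v", timeout)
	}

	func main() {
		err := waitForNode(http.DefaultClient, "https://192.168.49.2:8441/api/v1/nodes/functional-012915", 6*time.Minute)
		fmt.Println(err)
	}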
	I1002 20:48:28.608117  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.625656  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.661232  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.663490  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.663556  103439 retry.go:31] will retry after 361.771557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679351  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.679399  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679416  103439 retry.go:31] will retry after 152.242547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
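	[editor's note] Every failed apply above is retried on a growing, jittered interval (152ms, 207ms, 361ms, ... up to several seconds), which is why the same stderr block repeats through the rest of this log until the API server comes back. A minimal sketch of that loop (hypothetical, not minikube's retry.go; the real first attempt omits --force):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply runs the apply, and on failure sleeps a growing, jittered
	// interval before trying again, until an overall deadline passes.
	func retryApply(manifest string, deadline time.Duration) error {
		wait := 150 * time.Millisecond
		end := time.Now().Add(deadline)
		for {
			err := exec.Command("kubectl", "apply", "--force", "-f", manifest).Run()
			if err == nil {
				return nil
			}
			if time.Now().After(end) {
				return fmt.Errorf("giving up on %s: %w", manifest, err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait))) // jitter to desynchronize retries
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			wait *= 2 // exponential backoff
		}
	}

	func main() { fmt.Println(retryApply("/etc/kubernetes/addons/storageclass.yaml", time.Minute)) }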
	I1002 20:48:28.831815  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.883542  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.883591  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.883623  103439 retry.go:31] will retry after 207.681653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.025956  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.075113  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.076262  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.076342  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.076623  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:29.077506  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.077533  103439 retry.go:31] will retry after 323.914971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.091861  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.140394  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.142831  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.142876  103439 retry.go:31] will retry after 594.351303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.402253  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.454867  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.454924  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.454957  103439 retry.go:31] will retry after 314.476021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.576263  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.576411  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.576803  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:29.738004  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.769756  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.788694  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.790987  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.791025  103439 retry.go:31] will retry after 1.197724944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822453  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.822502  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822528  103439 retry.go:31] will retry after 662.931836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.075955  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.076032  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.076409  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:30.485957  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:30.538516  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:30.538557  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.538578  103439 retry.go:31] will retry after 1.629504367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.575804  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.575880  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.576213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:30.576271  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:30.989890  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.043558  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.043619  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.043637  103439 retry.go:31] will retry after 801.444903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.075880  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.075960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.576114  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.576220  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.576603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.845951  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.899339  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.899391  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.899410  103439 retry.go:31] will retry after 2.181457366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.075931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.076334  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:32.168648  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:32.220495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:32.220539  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.220557  103439 retry.go:31] will retry after 1.373851602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.576076  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.576533  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:32.576599  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:33.076393  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.076861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.576337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.595591  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:33.646012  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:33.648297  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:33.648332  103439 retry.go:31] will retry after 3.090030694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.075896  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.075981  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.076263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:34.081465  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:34.133647  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:34.133724  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.133770  103439 retry.go:31] will retry after 3.497111827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.576313  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.576409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.576832  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:34.576893  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:35.075636  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.076135  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:35.575728  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.576239  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.076110  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.076196  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.076574  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.575482  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.575578  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.575974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.739297  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:36.791716  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:36.791786  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:36.791808  103439 retry.go:31] will retry after 4.619526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.076288  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.076368  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.076721  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:37.076814  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:37.576414  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.576492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.576867  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:37.632068  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:37.685537  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:37.685582  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.685612  103439 retry.go:31] will retry after 3.179037423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:38.076157  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.076633  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:38.576327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.576425  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.576797  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.075409  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.075858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.575455  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.575567  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.575934  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:39.576000  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:40.075790  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.076280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.575900  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.575982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.576339  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.865793  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:40.922102  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:40.922154  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:40.922173  103439 retry.go:31] will retry after 8.017978865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.075452  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.075541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.075959  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:41.412402  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:41.462892  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:41.465283  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.465317  103439 retry.go:31] will retry after 6.722422885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.575519  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.575978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:41.576042  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:42.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.075773  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:42.575731  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.575835  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.076025  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.076442  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.576250  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.576635  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:43.576711  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:44.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.076398  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.076835  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:44.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.575566  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.575930  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.075679  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.075780  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.076197  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.575922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.576287  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:46.075882  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.075956  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:46.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:46.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.576194  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.576549  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.076328  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.076667  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.576364  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.576474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.576869  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.075470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.075556  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.075935  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.188198  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:48.240819  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.240876  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.240960  103439 retry.go:31] will retry after 5.203774684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.575470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.575548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.575916  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:48.575985  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:48.940390  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:48.992334  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.994935  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.994965  103439 retry.go:31] will retry after 7.700365391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
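
The delays chosen by retry.go (8.0s, 6.7s, 5.2s, 7.7s, then 18.7s and longer) grow roughly geometrically with heavy jitter, which is why they are not monotonic. minikube's retry helper is its own code; as a sketch under that assumption, the same shape can be reproduced with the apimachinery wait package:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Jittered geometric backoff, loosely matching the intervals in the log.
	backoff := wait.Backoff{
		Duration: 5 * time.Second, // base delay
		Factor:   1.5,             // growth per attempt
		Jitter:   0.5,             // randomness explains 8.0s, 6.7s, 5.2s, 7.7s, ...
		Steps:    10,              // give up after this many attempts
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		cmd := exec.Command("sh", "-c",
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
				"/var/lib/minikube/binaries/v1.34.1/kubectl apply --force "+
				"-f /etc/kubernetes/addons/storageclass.yaml")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply failed, will retry: %v\n%s\n", err, out)
			return false, nil // not done; back off and try again
		}
		return true, nil // applied successfully
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
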
	I1002 20:48:49.076327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.076416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.076830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:49.575454  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.575554  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:50.075711  103439 type.go:168] "Request Body" body=""
	I1002 20:48:50.075826  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:50.076249  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:50.575864  103439 type.go:168] "Request Body" body=""
	I1002 20:48:50.575961  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:50.576351  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:50.576415  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:51.076075  103439 type.go:168] "Request Body" body=""
	I1002 20:48:51.076176  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:51.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:51.575972  103439 type.go:168] "Request Body" body=""
	I1002 20:48:51.576054  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:51.576387  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:52.076055  103439 type.go:168] "Request Body" body=""
	I1002 20:48:52.076146  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:52.076526  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:52.576203  103439 type.go:168] "Request Body" body=""
	I1002 20:48:52.576289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:52.576688  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:52.576771  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:53.076363  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.076444  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.076831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:53.445247  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:53.496043  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:53.498518  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.498561  103439 retry.go:31] will retry after 18.668445084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.576330  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:54.076074  103439 type.go:168] "Request Body" body=""
	I1002 20:48:54.076158  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:54.076568  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:54.576230  103439 type.go:168] "Request Body" body=""
	I1002 20:48:54.576305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:54.576631  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:55.075724  103439 type.go:168] "Request Body" body=""
	I1002 20:48:55.075820  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:55.076207  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:55.076287  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:55.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:48:55.575924  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:55.576280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.075883  103439 type.go:168] "Request Body" body=""
	I1002 20:48:56.075963  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:56.076361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.576037  103439 type.go:168] "Request Body" body=""
	I1002 20:48:56.576120  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:56.576513  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.695837  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:56.749495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:56.749534  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:56.749553  103439 retry.go:31] will retry after 17.757887541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:57.076066  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.076153  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.076611  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:57.076679  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:57.576325  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.576416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:58.076237  103439 type.go:168] "Request Body" body=""
	I1002 20:48:58.076314  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:58.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:58.575412  103439 type.go:168] "Request Body" body=""
	I1002 20:48:58.575504  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:58.575865  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:59.075437  103439 type.go:168] "Request Body" body=""
	I1002 20:48:59.075528  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:59.075976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:59.575438  103439 type.go:168] "Request Body" body=""
	I1002 20:48:59.575539  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:59.575952  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:59.576014  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:00.075849  103439 type.go:168] "Request Body" body=""
	I1002 20:49:00.075928  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:00.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:00.575974  103439 type.go:168] "Request Body" body=""
	I1002 20:49:00.576072  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:00.576461  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:01.076180  103439 type.go:168] "Request Body" body=""
	I1002 20:49:01.076280  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:01.076643  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:01.576370  103439 type.go:168] "Request Body" body=""
	I1002 20:49:01.576466  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:01.576896  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:01.576970  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
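
Note that every Response line during the outage prints status="" headers="" milliseconds=0: the dial is refused before any HTTP exchange happens, so the logging transport has no *http.Response to describe. A hypothetical logging RoundTripper (not client-go's actual round_trippers implementation) showing why those fields come out empty:

package main

import (
	"log"
	"net/http"
	"time"
)

// logRT wraps another RoundTripper and logs each request/response pair.
type logRT struct{ next http.RoundTripper }

func (l logRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	log.Printf("Request verb=%q url=%q", req.Method, req.URL)
	resp, err := l.next.RoundTrip(req)
	if resp == nil {
		// Connection refused: nothing came back, so every field is empty/zero,
		// exactly as in the round_trippers.go:632 lines above.
		log.Printf("Response status=%q headers=%q milliseconds=%d", "", "", 0)
		return nil, err
	}
	log.Printf("Response status=%q milliseconds=%d",
		resp.Status, time.Since(start).Milliseconds())
	return resp, err
}

func main() {
	client := &http.Client{Transport: logRT{next: http.DefaultTransport}}
	if _, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-012915"); err != nil {
		log.Println("request failed (will retry):", err)
	}
}
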
	I1002 20:49:02.075515  103439 type.go:168] "Request Body" body=""
	I1002 20:49:02.075606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:02.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:02.575600  103439 type.go:168] "Request Body" body=""
	I1002 20:49:02.575686  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:02.576112  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:03.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:49:03.075769  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:03.076121  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:03.575712  103439 type.go:168] "Request Body" body=""
	I1002 20:49:03.575846  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:03.576202  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:04.075891  103439 type.go:168] "Request Body" body=""
	I1002 20:49:04.075970  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:04.076322  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:04.076381  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:04.576087  103439 type.go:168] "Request Body" body=""
	I1002 20:49:04.576249  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:04.576616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:05.075403  103439 type.go:168] "Request Body" body=""
	I1002 20:49:05.075481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:05.075839  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:05.575464  103439 type.go:168] "Request Body" body=""
	I1002 20:49:05.575572  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:05.575972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:06.075594  103439 type.go:168] "Request Body" body=""
	I1002 20:49:06.075677  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:06.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:06.575658  103439 type.go:168] "Request Body" body=""
	I1002 20:49:06.575767  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:06.576141  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:06.576200  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:07.075781  103439 type.go:168] "Request Body" body=""
	I1002 20:49:07.075865  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:07.076245  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:07.575885  103439 type.go:168] "Request Body" body=""
	I1002 20:49:07.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:07.576361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:08.075998  103439 type.go:168] "Request Body" body=""
	I1002 20:49:08.076084  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:08.076429  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:08.576307  103439 type.go:168] "Request Body" body=""
	I1002 20:49:08.576413  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:08.576814  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:08.576876  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:09.075362  103439 type.go:168] "Request Body" body=""
	I1002 20:49:09.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:09.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:09.575387  103439 type.go:168] "Request Body" body=""
	I1002 20:49:09.575482  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:09.575850  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:10.075783  103439 type.go:168] "Request Body" body=""
	I1002 20:49:10.075869  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:10.076249  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:10.575831  103439 type.go:168] "Request Body" body=""
	I1002 20:49:10.575935  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:10.576353  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:11.076044  103439 type.go:168] "Request Body" body=""
	I1002 20:49:11.076133  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:11.076599  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:11.076668  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:11.576237  103439 type.go:168] "Request Body" body=""
	I1002 20:49:11.576331  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:11.576683  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:12.076335  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.076430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.076838  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:12.168044  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:12.220925  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:12.220980  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.221004  103439 retry.go:31] will retry after 18.69466529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.575446  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.575535  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:13.075529  103439 type.go:168] "Request Body" body=""
	I1002 20:49:13.075604  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:13.075957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:13.575562  103439 type.go:168] "Request Body" body=""
	I1002 20:49:13.575652  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:13.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:13.576135  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:14.075639  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.075761  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.076134  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:14.507714  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:14.560377  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:14.560441  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.560472  103439 retry.go:31] will retry after 29.222161527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.575630  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.575695  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.575976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:15.075906  103439 type.go:168] "Request Body" body=""
	I1002 20:49:15.075982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:15.076361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:15.575992  103439 type.go:168] "Request Body" body=""
	I1002 20:49:15.576071  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:15.576414  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:15.576474  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:16.076107  103439 type.go:168] "Request Body" body=""
	I1002 20:49:16.076212  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:16.076649  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:16.576307  103439 type.go:168] "Request Body" body=""
	I1002 20:49:16.576391  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:16.576715  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:17.076322  103439 type.go:168] "Request Body" body=""
	I1002 20:49:17.076405  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:17.076824  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:17.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:49:17.575561  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:17.575924  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:18.076218  103439 type.go:168] "Request Body" body=""
	I1002 20:49:18.076306  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:18.076654  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:18.076715  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:18.576306  103439 type.go:168] "Request Body" body=""
	I1002 20:49:18.576386  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:18.576768  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:19.075340  103439 type.go:168] "Request Body" body=""
	I1002 20:49:19.075428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:19.075806  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:19.575441  103439 type.go:168] "Request Body" body=""
	I1002 20:49:19.575527  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:19.575944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:20.075821  103439 type.go:168] "Request Body" body=""
	I1002 20:49:20.075922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:20.076321  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:20.575880  103439 type.go:168] "Request Body" body=""
	I1002 20:49:20.575960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:20.576302  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:20.576377  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:21.075989  103439 type.go:168] "Request Body" body=""
	I1002 20:49:21.076074  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:21.076448  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:21.576110  103439 type.go:168] "Request Body" body=""
	I1002 20:49:21.576185  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:21.576542  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:22.576699  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the GET above repeats every ~500ms through 20:49:30, each attempt failing before any response (connection refused); identical node_ready.go:55 "will retry" warnings recur at 20:49:25, 20:49:27, and 20:49:29 ...]
	I1002 20:49:30.916459  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:30.966432  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:30.968861  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:30.968901  103439 retry.go:31] will retry after 21.359119468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
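The retry.go line above shows minikube's jittered-backoff retry around the failing kubectl apply: the command is re-run after a randomized, growing delay rather than at a fixed interval. A minimal sketch of that pattern in Go (illustrative only, not minikube's actual retry.go; the base delay, attempt count, and helper name are assumptions):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry runs `kubectl apply --force -f manifest` and, on failure,
// sleeps a jittered, doubling backoff before the next attempt, mirroring the
// "will retry after 21.359119468s" line in the log above.
func applyWithRetry(manifest string, attempts int) error {
	backoff := 10 * time.Second // hypothetical base delay
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %s: %v\n", wait, lastErr)
		time.Sleep(wait)
		backoff *= 2 // grow the base for the next round
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}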
	[... node-ready polling continues every ~500ms from 20:49:31 through 20:49:43, every attempt still refused; node_ready.go:55 "will retry" warnings recur at 20:49:31, 20:49:33, 20:49:36, 20:49:38, 20:49:40, and 20:49:43 ...]
	I1002 20:49:43.782991  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:43.835836  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:43.835901  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:43.835926  103439 retry.go:31] will retry after 22.850861202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
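The polling traffic interleaved with these apply attempts is minikube waiting for the node's Ready condition. Conceptually, node_ready.go fetches the Node object every 500ms, checks its Ready condition, and logs-and-retries on connection errors. A minimal client-go sketch of that loop, assuming client-go is available (names, timeout, and structure are illustrative, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node until its Ready condition is True,
// logging and retrying on errors such as the "connection refused" seen above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Matches the log: error getting node ... (will retry)
			fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
		} else {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // the ~500ms cadence in the log
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-012915"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}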
	[... node-ready polling continues every ~500ms from 20:49:44 through 20:49:52, still refused; node_ready.go:55 "will retry" warnings recur at 20:49:45, 20:49:48, 20:49:50, and 20:49:52 ...]
	I1002 20:49:52.328832  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:52.382480  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382546  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382704  103439 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
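The "Process exited with status 1 / stdout: / stderr:" blocks above come from running kubectl on the node with an explicit KUBECONFIG and folding the exit status plus both captured streams into a single error. A minimal local sketch of that shape (the kubectl and kubeconfig paths are taken from the log; the helper itself is an illustration, not minikube's ssh_runner):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// kubectlApply runs kubectl with an explicit KUBECONFIG and, on a non-zero
// exit, returns the status together with the captured stdout and stderr,
// the way the error blocks above are formatted.
func kubectlApply(kubeconfig, manifest string) error {
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("%v\nstdout:\n%s\nstderr:\n%s", err, stdout.String(), stderr.String())
	}
	return nil
}

func main() {
	err := kubectlApply("/var/lib/minikube/kubeconfig", "/etc/kubernetes/addons/storage-provisioner.yaml")
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}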
	[... node-ready polling continues every ~500ms from 20:49:52 through 20:50:06, still refused; node_ready.go:55 "will retry" warnings recur at 20:49:54, 20:49:56, 20:49:59, 20:50:01, 20:50:03, and 20:50:05 ...]
	I1002 20:50:06.687689  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:50:06.737429  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739791  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739905  103439 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:50:06.742850  103439 out.go:179] * Enabled addons: 
	I1002 20:50:06.744531  103439 addons.go:514] duration metric: took 1m38.297120179s for enable addons: enabled=[]
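The duration metric above is the elapsed wall-clock time for the whole enable-addons phase, with an empty enabled list because every addon callback failed against the unreachable apiserver. A tiny sketch of how such a line is typically produced (illustrative, not minikube's actual addons code):

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	enabled := []string{}             // in this run, no addon callback succeeded
	time.Sleep(10 * time.Millisecond) // stand-in for the enable-addons work
	fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n", time.Since(start), enabled)
}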
	I1002 20:50:07.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:50:07.076424  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:07.076810  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:07.076887  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:07.575585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:07.575664  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:07.576013  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:08.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:50:08.075943  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:08.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:08.576074  103439 type.go:168] "Request Body" body=""
	I1002 20:50:08.576184  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:08.576885  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:09.075637  103439 type.go:168] "Request Body" body=""
	I1002 20:50:09.075726  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:09.076126  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:09.575856  103439 type.go:168] "Request Body" body=""
	I1002 20:50:09.575938  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:09.576289  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:09.576365  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:10.076213  103439 type.go:168] "Request Body" body=""
	I1002 20:50:10.076289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:10.076668  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:10.575384  103439 type.go:168] "Request Body" body=""
	I1002 20:50:10.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:10.575843  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:11.075634  103439 type.go:168] "Request Body" body=""
	I1002 20:50:11.075712  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:11.076109  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:11.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:50:11.575921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:11.576276  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:12.076113  103439 type.go:168] "Request Body" body=""
	I1002 20:50:12.076186  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:12.076607  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:12.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[746 similar log lines omitted: the same GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 poll repeated every ~500ms from 20:50:12 through 20:51:12, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused"; node_ready.go:55 logged a "will retry" warning approximately every 2s]
	I1002 20:51:12.576352  103439 type.go:168] "Request Body" body=""
	I1002 20:51:12.576428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:12.576813  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:12.576892  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:13.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.075526  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.075917  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:13.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.575640  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.576048  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.075644  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.075715  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.575664  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.575795  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.576210  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:15.076065  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.076151  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.076548  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:15.076609  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:15.576209  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.576658  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.076387  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.076472  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.575432  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.575509  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.575925  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.075499  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.075588  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.075953  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.575636  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.575717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.576139  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:17.576206  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:18.075726  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.075840  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.076170  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:18.576043  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.576134  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.576500  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.076156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.076608  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.576287  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:19.576823  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:20.075605  103439 type.go:168] "Request Body" body=""
	I1002 20:51:20.075689  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:20.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:20.575671  103439 type.go:168] "Request Body" body=""
	I1002 20:51:20.575771  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:20.576160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:21.075760  103439 type.go:168] "Request Body" body=""
	I1002 20:51:21.075844  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:21.076251  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:21.575856  103439 type.go:168] "Request Body" body=""
	I1002 20:51:21.575946  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:21.576277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:22.075938  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.076020  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.076385  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:22.076458  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:22.576058  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.576150  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.576496  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.076164  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.076256  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.076616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.576268  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.576350  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.576704  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:24.076361  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.076448  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:24.076882  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:24.575376  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.575452  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.575842  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.075817  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.075926  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.076324  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.575977  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.576326  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.076018  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.076112  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.076484  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.576139  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.576216  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.576529  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:26.576601  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:27.076219  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.076333  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.076702  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:27.576348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.576421  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.576775  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.075490  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.075928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.575733  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.575828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.576180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:29.075796  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.075881  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.076267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:29.076325  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:29.575904  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.575995  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.576458  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.076430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.076826  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.575400  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.575481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.575844  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.075477  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.075558  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.076018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.575552  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.575626  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.575957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:31.576019  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:32.075567  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.075648  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.076000  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:32.575617  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.575691  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.576091  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.075777  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.075867  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.076312  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.575892  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.575966  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.576360  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:33.576436  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:34.075990  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.076064  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.076423  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:34.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.576242  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.075451  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.075544  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.075944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.575553  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.575632  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.575984  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:36.075611  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.075690  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.076097  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:36.076170  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:36.575781  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.575857  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.576209  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.075787  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.075868  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.076233  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.575919  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.576016  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:38.076037  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.076126  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.076506  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:38.076573  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:38.576216  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.576315  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.576715  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.076566  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.076671  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.077118  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.575701  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.576184  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:40.076137  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.076214  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.076550  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:40.076615  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:40.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.576390  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.075322  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.075403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.075780  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.575391  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.575470  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.575870  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.075943  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.575565  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.575660  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:42.576127  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:43.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.075718  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.076099  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:43.575699  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.575814  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.576217  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.075869  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.075942  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.076297  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.575949  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.576319  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:44.576388  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:45.076331  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.076413  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.076728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:45.575369  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.575463  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.575833  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.075482  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.075561  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.075954  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.575542  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.575624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.575972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:47.075530  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.075605  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.076010  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:47.076101  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:47.575610  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.575685  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.576069  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.075710  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.075809  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.576499  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:49.076190  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.076263  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.076621  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:49.076681  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:49.576270  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.576351  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.075539  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.075624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.575631  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.575707  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.576114  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.075711  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.075818  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.076157  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.575814  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.575890  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.576235  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:51.576316  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:52.075820  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.075911  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.076272  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:52.575858  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.576284  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.075878  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.075963  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.076342  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.576038  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.576491  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:53.576559  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:54.076212  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.076289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.076627  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:54.576310  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.576389  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.075503  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.075581  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.075972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.575557  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.575642  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.576018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:56.075601  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:56.076141  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:56.575721  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.575815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.576144  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.075712  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.075821  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.076181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.575767  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.576216  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:58.075841  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.075920  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:58.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:58.576187  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.576265  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.576613  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:59.076311  103439 type.go:168] "Request Body" body=""
	I1002 20:51:59.076391  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:59.076790  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:59.576375  103439 type.go:168] "Request Body" body=""
	I1002 20:51:59.576454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:59.576812  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:00.075544  103439 type.go:168] "Request Body" body=""
	I1002 20:52:00.075629  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:00.075981  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:00.575537  103439 type.go:168] "Request Body" body=""
	I1002 20:52:00.575633  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:00.576003  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:00.576089  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:01.075618  103439 type.go:168] "Request Body" body=""
	I1002 20:52:01.075698  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:01.076058  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:01.575676  103439 type.go:168] "Request Body" body=""
	I1002 20:52:01.575782  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:01.576133  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:02.075714  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.075815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.076186  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:02.575783  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.575871  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.576224  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:02.576299  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[log condensed: the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 request/response pair repeats every ~500 ms from 20:52:03.075796 through 20:53:03.076597, each with the same Accept and User-Agent headers and an empty response (status="" headers="" milliseconds=0); node_ready.go:55 logs the same connection-refused "will retry" warning every 2-2.5 s, the last of them:]
	W1002 20:53:03.076676  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:03.575963  103439 type.go:168] "Request Body" body=""
	I1002 20:53:03.576036  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:03.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.076077  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.076167  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.076509  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.576256  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.576341  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.576710  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.075500  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.076015  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:05.576126  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:06.075659  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.075778  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:06.575713  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.575808  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.576161  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.075791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.075896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.076278  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.575857  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.576289  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:07.576361  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:08.075859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.075955  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.076329  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:08.576047  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.576136  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.576492  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.076215  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.076582  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.576306  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.576382  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.576707  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:09.576802  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:10.075438  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.075516  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.075948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:10.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.575609  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.575983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.075661  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.075769  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.076130  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.575757  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.575830  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.576189  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:12.075811  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.076252  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:12.076323  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:12.575823  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.575896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.576250  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.075897  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.075987  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.576059  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.576149  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.576497  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:14.076230  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.076305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.076648  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:14.076724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:14.576300  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.576375  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.576711  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.075457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.075942  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.575476  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.575564  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.075498  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.075597  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.075974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.575607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.575990  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:16.576057  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:17.075599  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.076066  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:17.575633  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.575706  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.576088  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.075675  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.075775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.076143  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.575997  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.576068  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.576432  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:18.576492  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:19.076147  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.076228  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:19.576248  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.576675  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.075447  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.075529  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.075898  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.575465  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.575538  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.575923  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:21.075521  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.075978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:21.076044  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:21.575665  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.575775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.075717  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.075828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.076183  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.575808  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.575897  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.576256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:23.075928  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.076009  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.076405  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:23.076478  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:23.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.576168  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.576558  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.076203  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.076290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.076643  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.576321  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.576404  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.576814  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.075708  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.075822  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.076180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.575791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.575873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.576263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:25.576328  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:26.075894  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.075978  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.076323  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:26.576003  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.576076  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.576445  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.076142  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.076232  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.076600  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.576701  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:27.576806  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:28.076370  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.076473  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.076858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:28.575697  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.575806  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.576163  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.075772  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.075851  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.076254  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.575812  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:30.076121  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.076195  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.076543  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:30.076603  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:30.576211  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.576293  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.076423  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.076802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.575356  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.575434  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.575808  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.575336  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.575410  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.575777  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:32.575837  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:33.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.075475  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.075865  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:33.575440  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.575517  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.575846  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.075534  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.075996  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.575566  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.575655  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.576020  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:34.576093  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:35.075839  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.075921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.076292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:35.575879  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.575953  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.576311  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.075998  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.076095  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.076469  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.576150  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.576229  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.576577  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:36.576639  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:37.076335  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.076417  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.076801  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:37.575377  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.575453  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.575879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.075474  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.075957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.575935  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:39.076017  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.076111  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.076475  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:39.076596  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:39.576181  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.576257  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.075456  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.075533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.575509  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.575586  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.075524  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.075607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.075983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.575591  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.575678  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:41.576118  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:42.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:42.575677  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.575790  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.576150  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.075731  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.075831  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.076198  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.575889  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.575972  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.576366  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:43.576426  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:44.075602  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.075701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.076125  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:44.575700  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.575816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.076167  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.076247  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.076676  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.576379  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.576462  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.576855  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:45.576932  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:46.075425  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.075515  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.075882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:46.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.575563  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.575944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.075576  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.075649  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.076028  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.575645  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.575724  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.576173  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:48.075842  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.075922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.076288  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:48.076360  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:48.576176  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.576259  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.576606  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.076289  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.076364  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.076718  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.575397  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.575476  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.575864  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.075484  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.075575  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.575634  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.576140  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:50.576223  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:51.075766  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.075855  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.076251  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:51.575845  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.575936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.576310  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.076007  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.076100  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.076512  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.576200  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.576311  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.576659  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:52.576723  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same "Request Body"/"Request"/"Response" cycle repeats every ~500ms from 20:53:53.076 through 20:54:27.076: each GET of https://192.168.49.2:8441/api/v1/nodes/functional-012915 gets no response ("dial tcp 192.168.49.2:8441: connect: connection refused"), and the node_ready.go:55 "will retry" warning recurs roughly every 2s, last at W1002 20:54:27.076864 ...]
	I1002 20:54:27.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.575541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.075717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:28.076117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.576130  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.576214  103439 node_ready.go:38] duration metric: took 6m0.001003861s for node "functional-012915" to be "Ready" ...
	I1002 20:54:28.579396  103439 out.go:203] 
	W1002 20:54:28.581273  103439 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:54:28.581294  103439 out.go:285] * 
	W1002 20:54:28.583020  103439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:54:28.584974  103439 out.go:203] 
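
	The six-minute loop above is minikube's node-readiness wait: the same GET is retried every ~500ms until the 6m0s deadline, and every attempt fails at the TCP layer because the apiserver container never comes up (see the CRI-O and kubelet logs below). For reference, a minimal sketch of an equivalent readiness poll using client-go; the node name and the 500ms/6m cadence come from this log, while the kubeconfig path and everything else are illustrative, not minikube's actual code:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// While the apiserver is down this is the "connect: connection refused"
			// error seen throughout the log above.
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Hypothetical kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			ready, err := nodeReady(ctx, cs, "functional-012915")
			if err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node to be Ready:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
			}
		}
	}

	With the apiserver refusing connections for the full window, a loop like this can only exhaust its deadline, which is exactly the GUEST_START/WaitNodeCondition failure reported above.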
	
	
	==> CRI-O <==
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.885114017Z" level=info msg="createCtr: deleting container 16564a8f8036bc7c90ccf24d061c487f09a6b071956df918122e4f456fc0e7c6 from storage" id=e74c4936-85a6-40d8-b6dd-479d3713227a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.888920116Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-012915_kube-system_7e750209f40bc1241cc38d19476e612c_0" id=dfc199da-232e-450b-83c4-4863712b12ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:19 functional-012915 crio[2919]: time="2025-10-02T20:54:19.889319932Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=e74c4936-85a6-40d8-b6dd-479d3713227a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.855296992Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=94437663-21a7-4f9b-8633-2d64066323f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.856192975Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=462c0d6e-8b0b-4a2b-9c40-b6510da69b60 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.856992012Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.857191241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.860550955Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.861148899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.876921598Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878372533Z" level=info msg="createCtr: deleting container ID 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56 from idIndex" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878412292Z" level=info msg="createCtr: removing container 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.878448047Z" level=info msg="createCtr: deleting container 77d657b22b129eb4d802555132e0f22eec77d8bb32503612919b7da6337e7b56 from storage" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:27 functional-012915 crio[2919]: time="2025-10-02T20:54:27.880726986Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=7d743606-b2b3-42bc-84a3-16612f523d59 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.855493992Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=481ac10c-de12-458c-abb1-8096200aa5b5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.856500268Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=f651de02-7be7-42fd-87f9-0472131057d6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.857511372Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.857835372Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.862332144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.862929501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.878752653Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.880391279Z" level=info msg="createCtr: deleting container ID d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842 from idIndex" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.880428353Z" level=info msg="createCtr: removing container d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.8804592Z" level=info msg="createCtr: deleting container d8faf932eb44fdb196b9250632b1530f83d306077ca2c3817efaa5544ccf0842 from storage" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:29 functional-012915 crio[2919]: time="2025-10-02T20:54:29.882638615Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_71bc375daf4e76699563858eee44bc44_0" id=b20235fc-d91f-4ad8-9822-8b26102e9d29 name=/runtime.v1.RuntimeService/CreateContainer
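
	Every control-plane container above dies at create time with "cannot open sd-bus: No such file or directory". That string appears to come from the OCI runtime's systemd cgroup backend (libsystemd's sd-bus) failing to reach a systemd bus socket inside the node, so etcd, kube-apiserver, kube-controller-manager, and kube-scheduler all fail identically. A quick, hypothetical probe for the socket paths sd-bus typically dials (paths assumed from systemd defaults, not taken from this run):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Socket paths sd-bus typically dials; assumed, not confirmed from this run.
		paths := []string{
			"/run/dbus/system_bus_socket", // default system bus address
			"/run/systemd/private",        // systemd manager's private endpoint
		}
		for _, p := range paths {
			if fi, err := os.Stat(p); err != nil {
				fmt.Printf("%-30s missing: %v\n", p, err)
			} else {
				fmt.Printf("%-30s present (mode %v)\n", p, fi.Mode())
			}
		}
	}

	If neither socket exists inside the minikube container, a systemd-based cgroup driver cannot work and every CreateContainer call would fail the same way, which would explain the empty "container status" table below.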
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:54:32.453839    4490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:32.454795    4490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:32.456532    4490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:32.457011    4490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:32.458535    4490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:54:32 up  2:36,  0 user,  load average: 0.66, 0.16, 0.36
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:54:23 functional-012915 kubelet[1773]: E1002 20:54:23.537408    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:23 functional-012915 kubelet[1773]: I1002 20:54:23.741320    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:23 functional-012915 kubelet[1773]: E1002 20:54:23.741779    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.854851    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881041    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:27 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:27 functional-012915 kubelet[1773]:  > podSandboxID="585b4230bcb56046e825d4238227e61b36dc2e8921ea6147c622b6bed61a91bf"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881140    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:27 functional-012915 kubelet[1773]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:27 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:27 functional-012915 kubelet[1773]: E1002 20:54:27.881170    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.324986    1773 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.855017    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.882979    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:29 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:29 functional-012915 kubelet[1773]:  > podSandboxID="c697c06eaaf20ef2888311ed130f6d0dab82776628f2d6e3d184e9abb1e08331"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.883098    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:29 functional-012915 kubelet[1773]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(71bc375daf4e76699563858eee44bc44): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:29 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:29 functional-012915 kubelet[1773]: E1002 20:54:29.883130    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="71bc375daf4e76699563858eee44bc44"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.010432    1773 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.319199    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.538652    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: I1002 20:54:30.743699    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.744162    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (308.417939ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (2.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (2.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 kubectl -- --context functional-012915 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 kubectl -- --context functional-012915 get pods: exit status 1 (97.650072ms)

                                                
                                                
** stderr ** 
	E1002 20:54:39.582717  108901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:39.583117  108901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:39.584544  108901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:39.584872  108901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:39.586263  108901 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-amd64 -p functional-012915 kubectl -- --context functional-012915 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
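Individual fields of the inspect output above can be pulled with a Go template instead of reading the full JSON; the harness itself does this later in the logs (for .State.Status and the 22/tcp port mapping), for example:

	docker inspect -f '{{.State.Status}}' functional-012915
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-012915

The first prints "running" for this container; the second resolves the host port published for the apiserver port (32781 here).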
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (287.605193ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
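Since --format={{.Host}} reports only the host component, it prints "Running" even though the apiserver is down. Assuming the standard status fields (Host, Kubelet, APIServer, Kubeconfig), a fuller one-line view would be something like:

	out/minikube-linux-amd64 status -p functional-012915 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

which distinguishes a running container with a stopped apiserver (the situation here) from a stopped host.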
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	│ start   │ -p functional-012915 --alsologtostderr -v=8                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │                     │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.1                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.3                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:latest                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add minikube-local-cache-test:functional-012915                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache delete minikube-local-cache-test:functional-012915                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl images                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cache   │ functional-012915 cache reload                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ kubectl │ functional-012915 kubectl -- --context functional-012915 get pods                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:48:24
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:48:24.799042  103439 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:24.799301  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799310  103439 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:24.799319  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799517  103439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:48:24.799997  103439 out.go:368] Setting JSON to false
	I1002 20:48:24.800864  103439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9046,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:48:24.800953  103439 start.go:140] virtualization: kvm guest
	I1002 20:48:24.803402  103439 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:48:24.804691  103439 notify.go:220] Checking for updates...
	I1002 20:48:24.804714  103439 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:48:24.806239  103439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:48:24.807535  103439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:24.808966  103439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:48:24.810229  103439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:48:24.811490  103439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:48:24.813239  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:24.813364  103439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:48:24.837336  103439 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:48:24.837438  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.897484  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.886469072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.897616  103439 docker.go:318] overlay module found
	I1002 20:48:24.900384  103439 out.go:179] * Using the docker driver based on existing profile
	I1002 20:48:24.901640  103439 start.go:304] selected driver: docker
	I1002 20:48:24.901656  103439 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.901817  103439 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:48:24.901921  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.957281  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.94713494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.957915  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:24.957982  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:24.958030  103439 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.959902  103439 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:48:24.961424  103439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:48:24.962912  103439 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:48:24.964111  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:24.964148  103439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:48:24.964157  103439 cache.go:58] Caching tarball of preloaded images
	I1002 20:48:24.964205  103439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:48:24.964264  103439 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:48:24.964275  103439 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:48:24.964363  103439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:48:24.984848  103439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:48:24.984867  103439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:48:24.984883  103439 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:48:24.984905  103439 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:48:24.984974  103439 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-012915"
	I1002 20:48:24.984991  103439 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:48:24.984998  103439 fix.go:54] fixHost starting: 
	I1002 20:48:24.985199  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:25.001871  103439 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:48:25.001898  103439 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:48:25.003929  103439 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:48:25.003964  103439 machine.go:93] provisionDockerMachine start ...
	I1002 20:48:25.004037  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.020996  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.021230  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.021243  103439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:48:25.163676  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.163710  103439 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:48:25.163781  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.181773  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.181995  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.182012  103439 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:48:25.333959  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.334023  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.352331  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.352586  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.352605  103439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:48:25.495627  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:48:25.495660  103439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:48:25.495680  103439 ubuntu.go:190] setting up certificates
	I1002 20:48:25.495691  103439 provision.go:84] configureAuth start
	I1002 20:48:25.495761  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:25.513229  103439 provision.go:143] copyHostCerts
	I1002 20:48:25.513269  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513297  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:48:25.513309  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513378  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:48:25.513471  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513489  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:48:25.513496  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513524  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:48:25.513585  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513606  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:48:25.513612  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513642  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:48:25.513706  103439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:48:25.699700  103439 provision.go:177] copyRemoteCerts
	I1002 20:48:25.699774  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:48:25.699818  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.717132  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:25.819529  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:48:25.819590  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:48:25.836961  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:48:25.837026  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:48:25.853991  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:48:25.854053  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:48:25.872348  103439 provision.go:87] duration metric: took 376.642239ms to configureAuth
	I1002 20:48:25.872378  103439 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:48:25.872536  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.872653  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.891454  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.891685  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.891706  103439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:48:26.156804  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:48:26.156829  103439 machine.go:96] duration metric: took 1.152858016s to provisionDockerMachine
	I1002 20:48:26.156858  103439 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:48:26.156868  103439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:48:26.156920  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:48:26.156969  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.176188  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.278892  103439 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:48:26.282350  103439 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:48:26.282380  103439 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:48:26.282385  103439 command_runner.go:130] > VERSION_ID="12"
	I1002 20:48:26.282389  103439 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:48:26.282393  103439 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:48:26.282397  103439 command_runner.go:130] > ID=debian
	I1002 20:48:26.282401  103439 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:48:26.282406  103439 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:48:26.282410  103439 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:48:26.282454  103439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:48:26.282471  103439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:48:26.282480  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:48:26.282532  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:48:26.282613  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:48:26.282622  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 20:48:26.282689  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:48:26.282696  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> /etc/test/nested/copy/84100/hosts
	I1002 20:48:26.282728  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:48:26.291027  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:26.308674  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:48:26.325806  103439 start.go:296] duration metric: took 168.930408ms for postStartSetup
	I1002 20:48:26.325916  103439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:48:26.325957  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.343664  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.443702  103439 command_runner.go:130] > 54%
	I1002 20:48:26.443812  103439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:48:26.449039  103439 command_runner.go:130] > 135G
	I1002 20:48:26.449077  103439 fix.go:56] duration metric: took 1.464076482s for fixHost
	I1002 20:48:26.449092  103439 start.go:83] releasing machines lock for "functional-012915", held for 1.464107586s
	I1002 20:48:26.449173  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:26.467196  103439 ssh_runner.go:195] Run: cat /version.json
	I1002 20:48:26.467258  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.467342  103439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:48:26.467420  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.485438  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.485701  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.633417  103439 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:48:26.635353  103439 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:48:26.635549  103439 ssh_runner.go:195] Run: systemctl --version
	I1002 20:48:26.642439  103439 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:48:26.642484  103439 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:48:26.642544  103439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:48:26.678549  103439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:48:26.683206  103439 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:48:26.683277  103439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:48:26.683333  103439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:48:26.691349  103439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:48:26.691374  103439 start.go:495] detecting cgroup driver to use...
	I1002 20:48:26.691404  103439 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:48:26.691448  103439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:48:26.705612  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:48:26.718317  103439 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:48:26.718372  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:48:26.732790  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:48:26.745127  103439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:48:26.830208  103439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:48:26.916089  103439 docker.go:234] disabling docker service ...
	I1002 20:48:26.916158  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:48:26.931041  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:48:26.944314  103439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:48:27.029050  103439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:48:27.113127  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:48:27.125650  103439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:48:27.138813  103439 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:48:27.139624  103439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:48:27.139683  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.148622  103439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:48:27.148678  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.157772  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.166537  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.175276  103439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:48:27.183311  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.192091  103439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.200250  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.208827  103439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:48:27.216057  103439 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:48:27.216134  103439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:48:27.223341  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.309631  103439 ssh_runner.go:195] Run: sudo systemctl restart crio
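	# Reconstructed sketch (not captured output): after the sed edits above,
	# /etc/crio/crio.conf.d/02-crio.conf should contain approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]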
	I1002 20:48:27.427286  103439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:48:27.427366  103439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:48:27.431839  103439 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:48:27.431866  103439 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:48:27.431885  103439 command_runner.go:130] > Device: 0,59	Inode: 3822        Links: 1
	I1002 20:48:27.431892  103439 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:27.431897  103439 command_runner.go:130] > Access: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431903  103439 command_runner.go:130] > Modify: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431907  103439 command_runner.go:130] > Change: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431912  103439 command_runner.go:130] >  Birth: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431962  103439 start.go:563] Will wait 60s for crictl version
	I1002 20:48:27.432014  103439 ssh_runner.go:195] Run: which crictl
	I1002 20:48:27.435939  103439 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:48:27.436036  103439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:48:27.458416  103439 command_runner.go:130] > Version:  0.1.0
	I1002 20:48:27.458438  103439 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:48:27.458443  103439 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:48:27.458448  103439 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:48:27.460155  103439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:48:27.460222  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.486159  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.486183  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.486190  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.486198  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.486205  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.486212  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.486219  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.486226  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.486237  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.486246  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.486251  103439 command_runner.go:130] >      static
	I1002 20:48:27.486259  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.486263  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.486266  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.486272  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.486276  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.486300  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.486312  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.486330  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.486339  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.487532  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.514593  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.514624  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.514630  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.514634  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.514639  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.514643  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.514647  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.514654  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.514658  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.514662  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.514665  103439 command_runner.go:130] >      static
	I1002 20:48:27.514668  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.514677  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.514685  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.514688  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.514691  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.514695  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.514699  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.514706  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.514709  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.516768  103439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:48:27.518063  103439 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
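The --format argument above is a Go template handed to docker network inspect; minikube uses it to pull the subnet, gateway, MTU, and container IPs out of the network in a single call. As a minimal standalone sketch (assuming a stock Docker install with the default bridge network), the same template machinery can be exercised directly:

	# print the subnet of the default bridge network
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# the MTU lookup in the logged template uses the "index" template function:
	docker network inspect bridge --format '{{index .Options "com.docker.network.driver.mtu"}}'

The second command prints an empty string on networks that do not set the com.docker.network.driver.mtu option, which is why the logged template wraps the lookup in an {{if}} with a 0 fallback.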
	I1002 20:48:27.535001  103439 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:48:27.539645  103439 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1002 20:48:27.539759  103439 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:48:27.539875  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:27.539928  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.571471  103439 command_runner.go:130] > {
	I1002 20:48:27.571489  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.571493  103439 command_runner.go:130] >     {
	I1002 20:48:27.571502  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.571507  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571513  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.571516  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571520  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571528  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.571535  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.571539  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571543  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.571547  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571554  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571560  103439 command_runner.go:130] >     },
	I1002 20:48:27.571568  103439 command_runner.go:130] >     {
	I1002 20:48:27.571574  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.571577  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571583  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.571588  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571592  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571600  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.571610  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.571616  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571620  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.571626  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571633  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571644  103439 command_runner.go:130] >     },
	I1002 20:48:27.571650  103439 command_runner.go:130] >     {
	I1002 20:48:27.571656  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.571662  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571667  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.571672  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571676  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571685  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.571694  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.571700  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571704  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.571710  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.571714  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571719  103439 command_runner.go:130] >     },
	I1002 20:48:27.571721  103439 command_runner.go:130] >     {
	I1002 20:48:27.571727  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.571733  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571752  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.571758  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571767  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571778  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.571787  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.571792  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571796  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.571802  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571805  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571810  103439 command_runner.go:130] >       },
	I1002 20:48:27.571824  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571831  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571834  103439 command_runner.go:130] >     },
	I1002 20:48:27.571838  103439 command_runner.go:130] >     {
	I1002 20:48:27.571844  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.571850  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571859  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.571866  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571870  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571879  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.571888  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.571894  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571898  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.571903  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571907  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571913  103439 command_runner.go:130] >       },
	I1002 20:48:27.571916  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571922  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571925  103439 command_runner.go:130] >     },
	I1002 20:48:27.571931  103439 command_runner.go:130] >     {
	I1002 20:48:27.571937  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.571943  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571948  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.571953  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571957  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571967  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.571976  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.571981  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571985  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.571991  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571994  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572000  103439 command_runner.go:130] >       },
	I1002 20:48:27.572003  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572009  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572012  103439 command_runner.go:130] >     },
	I1002 20:48:27.572015  103439 command_runner.go:130] >     {
	I1002 20:48:27.572023  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.572027  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572038  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.572048  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572054  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572061  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.572070  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.572076  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572080  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.572085  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572089  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572095  103439 command_runner.go:130] >     },
	I1002 20:48:27.572098  103439 command_runner.go:130] >     {
	I1002 20:48:27.572106  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.572109  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572114  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.572119  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572123  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572132  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.572157  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.572163  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572167  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.572172  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572175  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572180  103439 command_runner.go:130] >       },
	I1002 20:48:27.572184  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572189  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572192  103439 command_runner.go:130] >     },
	I1002 20:48:27.572197  103439 command_runner.go:130] >     {
	I1002 20:48:27.572203  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.572206  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572213  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.572217  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572222  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572229  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.572237  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.572248  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572254  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.572258  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572263  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.572267  103439 command_runner.go:130] >       },
	I1002 20:48:27.572273  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572282  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.572288  103439 command_runner.go:130] >     }
	I1002 20:48:27.572291  103439 command_runner.go:130] >   ]
	I1002 20:48:27.572295  103439 command_runner.go:130] > }
	I1002 20:48:27.573606  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.573628  103439 crio.go:433] Images already preloaded, skipping extraction
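Since the preload check shells out to crictl with JSON output, the same inventory is easy to slice on the node itself. A minimal sketch, assuming jq is installed alongside crictl:

	# list the tags of every image known to CRI-O
	sudo crictl images --output json | jq -r '.images[].repoTags[]'
	# or only the pinned pause image
	sudo crictl images --output json | jq -r '.images[] | select(.pinned) | .repoTags[]'

Against the inventory above, the second command would print registry.k8s.io/pause:3.10.1, the only entry with "pinned": true.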
	I1002 20:48:27.573687  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.599395  103439 command_runner.go:130] > {
	I1002 20:48:27.599418  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.599424  103439 command_runner.go:130] >     {
	I1002 20:48:27.599434  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.599439  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599447  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.599452  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599460  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599473  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.599500  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.599510  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599518  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.599526  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599540  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599549  103439 command_runner.go:130] >     },
	I1002 20:48:27.599555  103439 command_runner.go:130] >     {
	I1002 20:48:27.599575  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.599582  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599590  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.599596  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599604  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599624  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.599640  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.599648  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599656  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.599664  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599676  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599684  103439 command_runner.go:130] >     },
	I1002 20:48:27.599690  103439 command_runner.go:130] >     {
	I1002 20:48:27.599703  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.599713  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599722  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.599730  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599754  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599770  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.599783  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.599791  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599798  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.599808  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.599815  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599823  103439 command_runner.go:130] >     },
	I1002 20:48:27.599829  103439 command_runner.go:130] >     {
	I1002 20:48:27.599840  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.599849  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599858  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.599865  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599873  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599887  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.599901  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.599918  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599927  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.599934  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.599942  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.599948  103439 command_runner.go:130] >       },
	I1002 20:48:27.599974  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599984  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599989  103439 command_runner.go:130] >     },
	I1002 20:48:27.599994  103439 command_runner.go:130] >     {
	I1002 20:48:27.600004  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.600013  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600021  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.600029  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600036  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600050  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.600065  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.600073  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600080  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.600089  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600103  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600112  103439 command_runner.go:130] >       },
	I1002 20:48:27.600119  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600128  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600134  103439 command_runner.go:130] >     },
	I1002 20:48:27.600142  103439 command_runner.go:130] >     {
	I1002 20:48:27.600152  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.600161  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600171  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.600179  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600185  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600199  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.600213  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.600220  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600233  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.600242  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600250  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600258  103439 command_runner.go:130] >       },
	I1002 20:48:27.600264  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600273  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600278  103439 command_runner.go:130] >     },
	I1002 20:48:27.600284  103439 command_runner.go:130] >     {
	I1002 20:48:27.600297  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.600306  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600315  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.600332  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600339  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600354  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.600368  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.600376  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600383  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.600393  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600401  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600410  103439 command_runner.go:130] >     },
	I1002 20:48:27.600415  103439 command_runner.go:130] >     {
	I1002 20:48:27.600423  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.600428  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600437  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.600446  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600452  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600464  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.600497  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.600505  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600513  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.600520  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600527  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600536  103439 command_runner.go:130] >       },
	I1002 20:48:27.600554  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600563  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600569  103439 command_runner.go:130] >     },
	I1002 20:48:27.600574  103439 command_runner.go:130] >     {
	I1002 20:48:27.600585  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.600594  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600603  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.600611  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600618  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600631  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.600643  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.600652  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600659  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.600668  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600676  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.600684  103439 command_runner.go:130] >       },
	I1002 20:48:27.600692  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600701  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.600708  103439 command_runner.go:130] >     }
	I1002 20:48:27.600716  103439 command_runner.go:130] >   ]
	I1002 20:48:27.600721  103439 command_runner.go:130] > }
	I1002 20:48:27.600844  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.600859  103439 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:48:27.600868  103439 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:48:27.600982  103439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
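The double ExecStart= in the drop-in above is the standard systemd idiom for replacing a unit's command line: the first, empty ExecStart= clears the ExecStart list inherited from the base kubelet unit, and the second installs minikube's flags. A quick way to confirm the merged result on the node (a sketch, assuming systemd is PID 1 inside the kicbase container):

	# show the base unit plus every drop-in that overrides it
	systemctl cat kubelet
	# pick up edits to the drop-in and restart with the new flags
	sudo systemctl daemon-reload && sudo systemctl restart kubelet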
	I1002 20:48:27.601057  103439 ssh_runner.go:195] Run: crio config
	I1002 20:48:27.642390  103439 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:48:27.642423  103439 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:48:27.642435  103439 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:48:27.642439  103439 command_runner.go:130] > #
	I1002 20:48:27.642450  103439 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:48:27.642460  103439 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:48:27.642470  103439 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:48:27.642501  103439 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:48:27.642510  103439 command_runner.go:130] > # reload'.
	I1002 20:48:27.642520  103439 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:48:27.642532  103439 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:48:27.642543  103439 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:48:27.642558  103439 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:48:27.642563  103439 command_runner.go:130] > [crio]
	I1002 20:48:27.642572  103439 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:48:27.642580  103439 command_runner.go:130] > # container images, in this directory.
	I1002 20:48:27.642602  103439 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:48:27.642618  103439 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:48:27.642627  103439 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:48:27.642637  103439 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 20:48:27.642643  103439 command_runner.go:130] > # imagestore = ""
	I1002 20:48:27.642656  103439 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:48:27.642670  103439 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:48:27.642681  103439 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:48:27.642691  103439 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:48:27.642708  103439 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:48:27.642715  103439 command_runner.go:130] > # storage_option = [
	I1002 20:48:27.642723  103439 command_runner.go:130] > # ]
	I1002 20:48:27.642733  103439 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:48:27.642762  103439 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:48:27.642770  103439 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:48:27.642783  103439 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:48:27.642796  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:48:27.642804  103439 command_runner.go:130] > # always happen on a node reboot
	I1002 20:48:27.642814  103439 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:48:27.642844  103439 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:48:27.642859  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:48:27.642869  103439 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:48:27.642883  103439 command_runner.go:130] > # version_file_persist = ""
	I1002 20:48:27.642895  103439 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:48:27.642919  103439 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:48:27.642930  103439 command_runner.go:130] > # internal_wipe = true
	I1002 20:48:27.642942  103439 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:48:27.642957  103439 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:48:27.642963  103439 command_runner.go:130] > # internal_repair = true
	I1002 20:48:27.642972  103439 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:48:27.642981  103439 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:48:27.642990  103439 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:48:27.642998  103439 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:48:27.643012  103439 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:48:27.643018  103439 command_runner.go:130] > [crio.api]
	I1002 20:48:27.643028  103439 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:48:27.643038  103439 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:48:27.643047  103439 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:48:27.643058  103439 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:48:27.643068  103439 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:48:27.643081  103439 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:48:27.643088  103439 command_runner.go:130] > # stream_port = "0"
	I1002 20:48:27.643100  103439 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:48:27.643107  103439 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:48:27.643117  103439 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:48:27.643126  103439 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:48:27.643137  103439 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:48:27.643149  103439 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643154  103439 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:48:27.643169  103439 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:48:27.643178  103439 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643188  103439 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:48:27.643205  103439 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:48:27.643218  103439 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:48:27.643228  103439 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:48:27.643241  103439 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:48:27.643279  103439 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643300  103439 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:48:27.643322  103439 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643333  103439 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:48:27.643343  103439 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:48:27.643352  103439 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:48:27.643370  103439 command_runner.go:130] > [crio.runtime]
	I1002 20:48:27.643381  103439 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:48:27.643393  103439 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:48:27.643403  103439 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:48:27.643414  103439 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:48:27.643423  103439 command_runner.go:130] > # default_ulimits = [
	I1002 20:48:27.643428  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643441  103439 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:48:27.643450  103439 command_runner.go:130] > # no_pivot = false
	I1002 20:48:27.643460  103439 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:48:27.643473  103439 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:48:27.643482  103439 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:48:27.643494  103439 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:48:27.643511  103439 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:48:27.643524  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643532  103439 command_runner.go:130] > # conmon = ""
	I1002 20:48:27.643539  103439 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:48:27.643549  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:48:27.643556  103439 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:48:27.643565  103439 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:48:27.643572  103439 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:48:27.643582  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643588  103439 command_runner.go:130] > # conmon_env = [
	I1002 20:48:27.643592  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643600  103439 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:48:27.643612  103439 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:48:27.643622  103439 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:48:27.643631  103439 command_runner.go:130] > # default_env = [
	I1002 20:48:27.643647  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643661  103439 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:48:27.643672  103439 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:48:27.643679  103439 command_runner.go:130] > # selinux = false
	I1002 20:48:27.643689  103439 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:48:27.643701  103439 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:48:27.643710  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643717  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.643729  103439 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:48:27.643755  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643766  103439 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:48:27.643777  103439 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:48:27.643790  103439 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:48:27.643804  103439 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:48:27.643815  103439 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:48:27.643826  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643834  103439 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:48:27.643847  103439 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:48:27.643856  103439 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:48:27.643863  103439 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:48:27.643875  103439 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:48:27.643886  103439 command_runner.go:130] > # blockio parameters.
	I1002 20:48:27.643892  103439 command_runner.go:130] > # blockio_reload = false
	I1002 20:48:27.643901  103439 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:48:27.643907  103439 command_runner.go:130] > # irqbalance daemon.
	I1002 20:48:27.643914  103439 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:48:27.643922  103439 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1002 20:48:27.643930  103439 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:48:27.643939  103439 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:48:27.643946  103439 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:48:27.643955  103439 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:48:27.643967  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643976  103439 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:48:27.643991  103439 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:48:27.643998  103439 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:48:27.644004  103439 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:48:27.644010  103439 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:48:27.644016  103439 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:48:27.644022  103439 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:48:27.644026  103439 command_runner.go:130] > # will be added.
	I1002 20:48:27.644030  103439 command_runner.go:130] > # default_capabilities = [
	I1002 20:48:27.644036  103439 command_runner.go:130] > # 	"CHOWN",
	I1002 20:48:27.644039  103439 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:48:27.644042  103439 command_runner.go:130] > # 	"FSETID",
	I1002 20:48:27.644046  103439 command_runner.go:130] > # 	"FOWNER",
	I1002 20:48:27.644049  103439 command_runner.go:130] > # 	"SETGID",
	I1002 20:48:27.644077  103439 command_runner.go:130] > # 	"SETUID",
	I1002 20:48:27.644089  103439 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:48:27.644096  103439 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:48:27.644099  103439 command_runner.go:130] > # 	"KILL",
	I1002 20:48:27.644102  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644111  103439 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:48:27.644117  103439 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:48:27.644124  103439 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:48:27.644129  103439 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:48:27.644137  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644140  103439 command_runner.go:130] > default_sysctls = [
	I1002 20:48:27.644146  103439 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:48:27.644149  103439 command_runner.go:130] > ]
	I1002 20:48:27.644153  103439 command_runner.go:130] > # List of devices on the host that a
	I1002 20:48:27.644159  103439 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:48:27.644165  103439 command_runner.go:130] > # allowed_devices = [
	I1002 20:48:27.644168  103439 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:48:27.644172  103439 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:48:27.644177  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644181  103439 command_runner.go:130] > # List of additional devices, specified as
	I1002 20:48:27.644194  103439 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:48:27.644201  103439 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:48:27.644207  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644210  103439 command_runner.go:130] > # additional_devices = [
	I1002 20:48:27.644213  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644218  103439 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:48:27.644224  103439 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:48:27.644227  103439 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:48:27.644231  103439 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:48:27.644235  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644241  103439 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:48:27.644249  103439 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:48:27.644253  103439 command_runner.go:130] > # Defaults to false.
	I1002 20:48:27.644259  103439 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:48:27.644265  103439 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:48:27.644272  103439 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:48:27.644275  103439 command_runner.go:130] > # hooks_dir = [
	I1002 20:48:27.644280  103439 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:48:27.644283  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644289  103439 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:48:27.644297  103439 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:48:27.644302  103439 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:48:27.644305  103439 command_runner.go:130] > #
	I1002 20:48:27.644310  103439 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:48:27.644323  103439 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:48:27.644329  103439 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:48:27.644334  103439 command_runner.go:130] > #
	I1002 20:48:27.644340  103439 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:48:27.644346  103439 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:48:27.644352  103439 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:48:27.644356  103439 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:48:27.644359  103439 command_runner.go:130] > #
	I1002 20:48:27.644363  103439 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:48:27.644377  103439 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:48:27.644385  103439 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:48:27.644389  103439 command_runner.go:130] > # pids_limit = -1
	I1002 20:48:27.644397  103439 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1002 20:48:27.644403  103439 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:48:27.644409  103439 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:48:27.644418  103439 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:48:27.644422  103439 command_runner.go:130] > # log_size_max = -1
	I1002 20:48:27.644430  103439 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:48:27.644434  103439 command_runner.go:130] > # log_to_journald = false
	I1002 20:48:27.644439  103439 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:48:27.644444  103439 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:48:27.644450  103439 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:48:27.644454  103439 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:48:27.644461  103439 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:48:27.644465  103439 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:48:27.644470  103439 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:48:27.644473  103439 command_runner.go:130] > # read_only = false
	I1002 20:48:27.644482  103439 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:48:27.644490  103439 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:48:27.644494  103439 command_runner.go:130] > # live configuration reload.
	I1002 20:48:27.644500  103439 command_runner.go:130] > # log_level = "info"
	I1002 20:48:27.644505  103439 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:48:27.644509  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.644512  103439 command_runner.go:130] > # log_filter = ""
	I1002 20:48:27.644518  103439 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644525  103439 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:48:27.644529  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644536  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644542  103439 command_runner.go:130] > # uid_mappings = ""
	I1002 20:48:27.644547  103439 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644552  103439 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:48:27.644559  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644573  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644579  103439 command_runner.go:130] > # gid_mappings = ""
	I1002 20:48:27.644585  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:48:27.644591  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644598  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644606  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644611  103439 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:48:27.644617  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:48:27.644625  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644631  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644640  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644644  103439 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:48:27.644652  103439 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:48:27.644657  103439 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:48:27.644665  103439 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1002 20:48:27.644668  103439 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:48:27.644673  103439 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:48:27.644679  103439 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:48:27.644686  103439 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1002 20:48:27.644690  103439 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:48:27.644693  103439 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:48:27.644699  103439 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:48:27.644706  103439 command_runner.go:130] > # You can use the Linux CPU list format to specify the desired CPUs.
	I1002 20:48:27.644712  103439 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:48:27.644718  103439 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:48:27.644726  103439 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:48:27.644733  103439 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:48:27.644752  103439 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:48:27.644764  103439 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:48:27.644769  103439 command_runner.go:130] > # shared_cpuset = ""
	I1002 20:48:27.644777  103439 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:48:27.644782  103439 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:48:27.644785  103439 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:48:27.644798  103439 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:48:27.644804  103439 command_runner.go:130] > # pinns_path = ""
	I1002 20:48:27.644810  103439 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:48:27.644817  103439 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:48:27.644821  103439 command_runner.go:130] > # enable_criu_support = true
	I1002 20:48:27.644826  103439 command_runner.go:130] > # Enable/disable the generation of the container and
	I1002 20:48:27.644831  103439 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:48:27.644837  103439 command_runner.go:130] > # enable_pod_events = false
	I1002 20:48:27.644842  103439 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:48:27.644849  103439 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:48:27.644853  103439 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:48:27.644858  103439 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:48:27.644867  103439 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior, where the path is created as a directory).
	I1002 20:48:27.644876  103439 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:48:27.644882  103439 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:48:27.644890  103439 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:48:27.644896  103439 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:48:27.644900  103439 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:48:27.644905  103439 command_runner.go:130] > # ]
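	The option above can be exercised with a small drop-in; a minimal sketch, assuming a hypothetical file name under the /etc/crio/crio.conf.d directory that CRI-O is shown reading later in this log:

		# Hypothetical drop-in: fail container creation if /etc/hostname is absent,
		# instead of letting it be created as a directory.
		sudo tee /etc/crio/crio.conf.d/20-absent-mounts.conf <<'EOF'
		[crio.runtime]
		absent_mount_sources_to_reject = [
		    "/etc/hostname",
		]
		EOF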
	I1002 20:48:27.644911  103439 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:48:27.644919  103439 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:48:27.644925  103439 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:48:27.644930  103439 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:48:27.644932  103439 command_runner.go:130] > #
	I1002 20:48:27.644937  103439 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:48:27.644943  103439 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:48:27.644947  103439 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:48:27.644951  103439 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:48:27.644955  103439 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:48:27.644959  103439 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:48:27.644963  103439 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:48:27.644968  103439 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:48:27.644972  103439 command_runner.go:130] > # monitor_env = []
	I1002 20:48:27.644980  103439 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:48:27.644987  103439 command_runner.go:130] > # allowed_annotations = []
	I1002 20:48:27.644992  103439 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:48:27.644998  103439 command_runner.go:130] > # no_sync_log = false
	I1002 20:48:27.645001  103439 command_runner.go:130] > # default_annotations = {}
	I1002 20:48:27.645007  103439 command_runner.go:130] > # stream_websockets = false
	I1002 20:48:27.645011  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.645086  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645099  103439 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:48:27.645104  103439 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:48:27.645110  103439 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:48:27.645115  103439 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:48:27.645119  103439 command_runner.go:130] > #   in $PATH.
	I1002 20:48:27.645124  103439 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:48:27.645131  103439 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:48:27.645137  103439 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:48:27.645142  103439 command_runner.go:130] > #   state.
	I1002 20:48:27.645148  103439 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:48:27.645156  103439 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:48:27.645161  103439 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:48:27.645173  103439 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:48:27.645180  103439 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:48:27.645186  103439 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:48:27.645191  103439 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:48:27.645197  103439 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:48:27.645205  103439 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:48:27.645216  103439 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:48:27.645224  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:48:27.645231  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:48:27.645239  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:48:27.645245  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:48:27.645254  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:48:27.645259  103439 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:48:27.645276  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:48:27.645284  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:48:27.645296  103439 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:48:27.645301  103439 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:48:27.645309  103439 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:48:27.645320  103439 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:48:27.645327  103439 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:48:27.645333  103439 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:48:27.645341  103439 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:48:27.645348  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:48:27.645355  103439 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:48:27.645360  103439 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:48:27.645368  103439 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:48:27.645373  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:48:27.645381  103439 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:48:27.645385  103439 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:48:27.645392  103439 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:48:27.645398  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:48:27.645405  103439 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1002 20:48:27.645410  103439 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:48:27.645417  103439 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:48:27.645426  103439 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:48:27.645433  103439 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:48:27.645441  103439 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:48:27.645446  103439 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:48:27.645454  103439 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:48:27.645461  103439 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:48:27.645468  103439 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:48:27.645475  103439 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:48:27.645484  103439 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:48:27.645490  103439 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:48:27.645496  103439 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:48:27.645505  103439 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:48:27.645517  103439 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:48:27.645523  103439 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:48:27.645529  103439 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:48:27.645542  103439 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:48:27.645548  103439 command_runner.go:130] > #
	I1002 20:48:27.645552  103439 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:48:27.645555  103439 command_runner.go:130] > #
	I1002 20:48:27.645560  103439 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:48:27.645569  103439 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:48:27.645573  103439 command_runner.go:130] > #
	I1002 20:48:27.645578  103439 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:48:27.645586  103439 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:48:27.645589  103439 command_runner.go:130] > #
	I1002 20:48:27.645595  103439 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:48:27.645598  103439 command_runner.go:130] > # feature.
	I1002 20:48:27.645601  103439 command_runner.go:130] > #
	I1002 20:48:27.645606  103439 command_runner.go:130] > # If everything is set up, CRI-O will modify the chosen seccomp profiles for
	I1002 20:48:27.645615  103439 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:48:27.645622  103439 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:48:27.645627  103439 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:48:27.645635  103439 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1002 20:48:27.645637  103439 command_runner.go:130] > #
	I1002 20:48:27.645643  103439 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:48:27.645651  103439 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:48:27.645653  103439 command_runner.go:130] > #
	I1002 20:48:27.645662  103439 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:48:27.645672  103439 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:48:27.645676  103439 command_runner.go:130] > #
	I1002 20:48:27.645682  103439 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:48:27.645690  103439 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:48:27.645693  103439 command_runner.go:130] > # limitation.
	I1002 20:48:27.645697  103439 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:48:27.645701  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:48:27.645709  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645715  103439 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:48:27.645725  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645731  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645746  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645754  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645762  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645768  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645777  103439 command_runner.go:130] > allowed_annotations = [
	I1002 20:48:27.645783  103439 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:48:27.645788  103439 command_runner.go:130] > ]
	I1002 20:48:27.645792  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645796  103439 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:48:27.645803  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:48:27.645807  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645811  103439 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:48:27.645815  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645818  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645822  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645826  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645830  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645834  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645838  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645844  103439 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:48:27.645852  103439 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:48:27.645857  103439 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:48:27.645866  103439 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1002 20:48:27.645875  103439 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:48:27.645886  103439 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:48:27.645894  103439 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:48:27.645899  103439 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:48:27.645907  103439 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:48:27.645917  103439 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:48:27.645930  103439 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:48:27.645940  103439 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:48:27.645943  103439 command_runner.go:130] > # Example:
	I1002 20:48:27.645949  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:48:27.645953  103439 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:48:27.645960  103439 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:48:27.645966  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:48:27.645972  103439 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:48:27.645975  103439 command_runner.go:130] > # cpushares = "5"
	I1002 20:48:27.645979  103439 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:48:27.645982  103439 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:48:27.645986  103439 command_runner.go:130] > # cpulimit = "35"
	I1002 20:48:27.645989  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645993  103439 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:48:27.646000  103439 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:48:27.646006  103439 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:48:27.646011  103439 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:48:27.646021  103439 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:48:27.646026  103439 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
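	Following the commented example above, a pod that activates the workload-type workload and overrides cpushares for one container would carry annotations along these lines (a sketch; the pod and container names are hypothetical, and the image is a placeholder):

		kubectl apply -f - <<'EOF'
		apiVersion: v1
		kind: Pod
		metadata:
		  name: workload-demo   # hypothetical
		  annotations:
		    io.crio/workload: "true"                          # activation: key only, value ignored
		    io.crio.workload-type/app: '{"cpushares": "10"}'  # per-container override
		spec:
		  containers:
		  - name: app
		    image: registry.k8s.io/pause:3.10.1
		EOF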
	I1002 20:48:27.646034  103439 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:48:27.646044  103439 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:48:27.646052  103439 command_runner.go:130] > # Default value is set to true
	I1002 20:48:27.646058  103439 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:48:27.646068  103439 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:48:27.646074  103439 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:48:27.646083  103439 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:48:27.646092  103439 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:48:27.646104  103439 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:48:27.646118  103439 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:48:27.646127  103439 command_runner.go:130] > # timezone = ""
	I1002 20:48:27.646136  103439 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:48:27.646144  103439 command_runner.go:130] > #
	I1002 20:48:27.646158  103439 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:48:27.646179  103439 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:48:27.646188  103439 command_runner.go:130] > [crio.image]
	I1002 20:48:27.646201  103439 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:48:27.646209  103439 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:48:27.646217  103439 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:48:27.646225  103439 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646229  103439 command_runner.go:130] > # global_auth_file = ""
	I1002 20:48:27.646236  103439 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:48:27.646241  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646248  103439 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.646254  103439 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:48:27.646260  103439 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646265  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646271  103439 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:48:27.646276  103439 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:48:27.646281  103439 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:48:27.646289  103439 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:48:27.646295  103439 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:48:27.646301  103439 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:48:27.646306  103439 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:48:27.646316  103439 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:48:27.646323  103439 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:48:27.646329  103439 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:48:27.646336  103439 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:48:27.646342  103439 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:48:27.646345  103439 command_runner.go:130] > # pinned_images = [
	I1002 20:48:27.646348  103439 command_runner.go:130] > # ]
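	For instance, pinning the pause image referenced earlier in this config is a one-line list in a drop-in (a sketch; the file name is hypothetical):

		sudo tee /etc/crio/crio.conf.d/40-pinned-images.conf <<'EOF'
		[crio.image]
		pinned_images = [
		    "registry.k8s.io/pause:3.10.1",
		]
		EOF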
	I1002 20:48:27.646354  103439 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:48:27.646362  103439 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:48:27.646368  103439 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:48:27.646376  103439 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:48:27.646381  103439 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:48:27.646386  103439 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:48:27.646399  103439 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:48:27.646411  103439 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:48:27.646423  103439 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:48:27.646436  103439 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1002 20:48:27.646447  103439 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:48:27.646458  103439 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:48:27.646470  103439 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:48:27.646480  103439 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:48:27.646486  103439 command_runner.go:130] > # changing them here.
	I1002 20:48:27.646491  103439 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:48:27.646497  103439 command_runner.go:130] > # insecure_registries = [
	I1002 20:48:27.646500  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646507  103439 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:48:27.646516  103439 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:48:27.646522  103439 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:48:27.646527  103439 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:48:27.646531  103439 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:48:27.646538  103439 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:48:27.646544  103439 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:48:27.646551  103439 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:48:27.646557  103439 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:48:27.646571  103439 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval, which is pull_progress_timeout / 10.
	I1002 20:48:27.646579  103439 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:48:27.646583  103439 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:48:27.646590  103439 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:48:27.646596  103439 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:48:27.646605  103439 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:48:27.646611  103439 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:48:27.646615  103439 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:48:27.646620  103439 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1002 20:48:27.646628  103439 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:48:27.646632  103439 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:48:27.646638  103439 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:48:27.646649  103439 command_runner.go:130] > # CNI plugins.
	I1002 20:48:27.646655  103439 command_runner.go:130] > [crio.network]
	I1002 20:48:27.646660  103439 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:48:27.646667  103439 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 20:48:27.646671  103439 command_runner.go:130] > # cni_default_network = ""
	I1002 20:48:27.646678  103439 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:48:27.646682  103439 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:48:27.646690  103439 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:48:27.646693  103439 command_runner.go:130] > # plugin_dirs = [
	I1002 20:48:27.646696  103439 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:48:27.646699  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646703  103439 command_runner.go:130] > # List of included pod metrics.
	I1002 20:48:27.646709  103439 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:48:27.646711  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646716  103439 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:48:27.646722  103439 command_runner.go:130] > [crio.metrics]
	I1002 20:48:27.646726  103439 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:48:27.646732  103439 command_runner.go:130] > # enable_metrics = false
	I1002 20:48:27.646752  103439 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:48:27.646761  103439 command_runner.go:130] > # Per default all metrics are enabled.
	I1002 20:48:27.646767  103439 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:48:27.646775  103439 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:48:27.646783  103439 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:48:27.646787  103439 command_runner.go:130] > # metrics_collectors = [
	I1002 20:48:27.646793  103439 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:48:27.646797  103439 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:48:27.646800  103439 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:48:27.646804  103439 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:48:27.646807  103439 command_runner.go:130] > # 	"operations_total",
	I1002 20:48:27.646811  103439 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:48:27.646815  103439 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:48:27.646818  103439 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:48:27.646822  103439 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:48:27.646831  103439 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:48:27.646835  103439 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:48:27.646839  103439 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:48:27.646842  103439 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:48:27.646846  103439 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:48:27.646850  103439 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:48:27.646853  103439 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:48:27.646857  103439 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:48:27.646860  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646868  103439 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:48:27.646874  103439 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:48:27.646880  103439 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:48:27.646886  103439 command_runner.go:130] > # metrics_port = 9090
	I1002 20:48:27.646891  103439 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:48:27.646901  103439 command_runner.go:130] > # metrics_socket = ""
	I1002 20:48:27.646909  103439 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:48:27.646914  103439 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:48:27.646922  103439 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:48:27.646928  103439 command_runner.go:130] > # certificate on any modification event.
	I1002 20:48:27.646932  103439 command_runner.go:130] > # metrics_cert = ""
	I1002 20:48:27.646939  103439 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:48:27.646943  103439 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:48:27.646949  103439 command_runner.go:130] > # metrics_key = ""
	I1002 20:48:27.646954  103439 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:48:27.646960  103439 command_runner.go:130] > [crio.tracing]
	I1002 20:48:27.646966  103439 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:48:27.646971  103439 command_runner.go:130] > # enable_tracing = false
	I1002 20:48:27.646977  103439 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:48:27.646983  103439 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:48:27.646993  103439 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:48:27.646999  103439 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1002 20:48:27.647003  103439 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:48:27.647009  103439 command_runner.go:130] > [crio.nri]
	I1002 20:48:27.647017  103439 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:48:27.647023  103439 command_runner.go:130] > # enable_nri = true
	I1002 20:48:27.647032  103439 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:48:27.647038  103439 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:48:27.647042  103439 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:48:27.647049  103439 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:48:27.647053  103439 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:48:27.647060  103439 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:48:27.647065  103439 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:48:27.647584  103439 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:48:27.647654  103439 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:48:27.647663  103439 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:48:27.647672  103439 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:48:27.647686  103439 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:48:27.647693  103439 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:48:27.647707  103439 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:48:27.647731  103439 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:48:27.647757  103439 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:48:27.647770  103439 command_runner.go:130] > # - OCI hook injection
	I1002 20:48:27.647779  103439 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:48:27.647792  103439 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:48:27.647798  103439 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:48:27.647805  103439 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:48:27.647819  103439 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:48:27.647828  103439 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:48:27.647837  103439 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:48:27.647841  103439 command_runner.go:130] > #
	I1002 20:48:27.647853  103439 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:48:27.647859  103439 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:48:27.647866  103439 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:48:27.647883  103439 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:48:27.647891  103439 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:48:27.647898  103439 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:48:27.647906  103439 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:48:27.647916  103439 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:48:27.647921  103439 command_runner.go:130] > # ]
	I1002 20:48:27.647929  103439 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 20:48:27.647939  103439 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:48:27.647949  103439 command_runner.go:130] > [crio.stats]
	I1002 20:48:27.647958  103439 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:48:27.647966  103439 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:48:27.647973  103439 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:48:27.647994  103439 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:48:27.648004  103439 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:48:27.648009  103439 command_runner.go:130] > # collection_period = 0
	I1002 20:48:27.648051  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627189517Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:48:27.648070  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627217069Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:48:27.648087  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627236914Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:48:27.648106  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627255188Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:48:27.648122  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.62731995Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.648141  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627489035Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:48:27.648161  103439 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1002 20:48:27.648318  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:27.648331  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:27.648354  103439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:48:27.648401  103439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:48:27.648942  103439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:48:27.649009  103439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:48:27.657181  103439 command_runner.go:130] > kubeadm
	I1002 20:48:27.657198  103439 command_runner.go:130] > kubectl
	I1002 20:48:27.657203  103439 command_runner.go:130] > kubelet
	I1002 20:48:27.657948  103439 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:48:27.658013  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:48:27.665603  103439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:48:27.678534  103439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:48:27.691111  103439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
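	With the rendered config staged as /var/tmp/minikube/kubeadm.yaml.new, a fresh bootstrap would typically hand it to the kubeadm binary found above, roughly as follows (a sketch; on a soft start like this one, minikube may only compare the file against the existing one and skip init entirely):

		# Assumes the .new file has been promoted to /var/tmp/minikube/kubeadm.yaml.
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
		    --config /var/tmp/minikube/kubeadm.yaml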
	I1002 20:48:27.703366  103439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:48:27.707046  103439 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:48:27.707133  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.791376  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:27.804011  103439 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:48:27.804040  103439 certs.go:195] generating shared ca certs ...
	I1002 20:48:27.804056  103439 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:27.804180  103439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:48:27.804232  103439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:48:27.804241  103439 certs.go:257] generating profile certs ...
	I1002 20:48:27.804334  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:48:27.804375  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:48:27.804412  103439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:48:27.804424  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:48:27.804435  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:48:27.804453  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:48:27.804469  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:48:27.804481  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:48:27.804494  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:48:27.804506  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:48:27.804518  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:48:27.804560  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:48:27.804591  103439 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:48:27.804601  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:48:27.804623  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:48:27.804645  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:48:27.804666  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:48:27.804704  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:27.804729  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 20:48:27.804763  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:27.804780  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 20:48:27.805294  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:48:27.822974  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:48:27.840455  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:48:27.858368  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:48:27.877146  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:48:27.895282  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:48:27.912487  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:48:27.929452  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:48:27.947144  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:48:27.964177  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:48:27.981785  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:48:27.999006  103439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:48:28.011646  103439 ssh_runner.go:195] Run: openssl version
	I1002 20:48:28.017389  103439 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:48:28.017621  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:48:28.025902  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029403  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029446  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029489  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.063085  103439 command_runner.go:130] > 3ec20f2e
	I1002 20:48:28.063182  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:48:28.071431  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:48:28.080075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083770  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083829  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083901  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.117894  103439 command_runner.go:130] > b5213941
	I1002 20:48:28.117982  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:48:28.126480  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:48:28.135075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138711  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138759  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138809  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.172582  103439 command_runner.go:130] > 51391683
	I1002 20:48:28.172931  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
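
	The three hash-and-link rounds above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout" prints the subject-name hash (3ec20f2e, b5213941, 51391683 here), and a symlink named <hash>.0 in /etc/ssl/certs lets the library find the CA by hash at verification time. A minimal Go sketch of one round, shelling out to openssl the way the log does (installCACert is a hypothetical helper, not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert hashes a PEM certificate with openssl and links
    // /etc/ssl/certs/<hash>.0 at it, mirroring the "openssl x509 -hash"
    // plus "ln -fs" pair in the log above.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // -f semantics: drop any stale link first
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/841002.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
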
	I1002 20:48:28.180914  103439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184555  103439 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184579  103439 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:48:28.184588  103439 command_runner.go:130] > Device: 8,1	Inode: 811435      Links: 1
	I1002 20:48:28.184598  103439 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:28.184608  103439 command_runner.go:130] > Access: 2025-10-02 20:44:21.070069799 +0000
	I1002 20:48:28.184616  103439 command_runner.go:130] > Modify: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184623  103439 command_runner.go:130] > Change: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184628  103439 command_runner.go:130] >  Birth: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184684  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:48:28.218476  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.218920  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:48:28.253813  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.254026  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:48:28.288477  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.288852  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:48:28.322969  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.323293  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:48:28.357073  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.357354  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:48:28.390854  103439 command_runner.go:130] > Certificate will not expire
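
	Each "-checkend 86400" invocation asks whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. The same check can be done in-process with Go's crypto/x509, sketched here under the assumption that each file holds a single PEM certificate:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within d, the same predicate as `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }
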
	I1002 20:48:28.391133  103439 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:28.391217  103439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:48:28.391280  103439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:48:28.420217  103439 cri.go:89] found id: ""
	I1002 20:48:28.420280  103439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:48:28.427672  103439 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:48:28.427700  103439 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:48:28.427710  103439 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:48:28.428396  103439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
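
	The ls probe above is the restart-vs-init decision: when the kubelet config, the kubeadm flags file, and the etcd data directory all still exist on the node, minikube restarts the existing cluster instead of running a fresh kubeadm init. A stdlib-only sketch of that check (hasExistingCluster is an illustrative name, not minikube's):

    package main

    import (
    	"fmt"
    	"os"
    )

    // hasExistingCluster reports whether the three artifacts probed in the
    // log are all present; minikube treats that as "restart, don't re-init".
    func hasExistingCluster() bool {
    	for _, p := range []string{
    		"/var/lib/kubelet/config.yaml",
    		"/var/lib/kubelet/kubeadm-flags.env",
    		"/var/lib/minikube/etcd",
    	} {
    		if _, err := os.Stat(p); err != nil {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	fmt.Println("existing cluster:", hasExistingCluster())
    }
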
	I1002 20:48:28.428413  103439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:48:28.428455  103439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:48:28.435936  103439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:48:28.436039  103439 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.436106  103439 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-012915" cluster setting kubeconfig missing "functional-012915" context setting]
	I1002 20:48:28.436458  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
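
	The lock.go line shows kubeconfig writes being serialized through a file lock with a 500ms retry delay and a 1m timeout, so concurrent minikube processes cannot corrupt the file. A rough stand-in using a sidecar lock file and only the standard library (minikube's actual lock implementation differs; the knobs below just reuse the logged 500ms/1m values):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // writeFileLocked takes an exclusive sidecar lock via O_CREATE|O_EXCL,
    // retrying every delay until timeout, writes the file, then releases
    // the lock. Sketch only, not minikube's lock package.
    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			break
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring " + lock)
    		}
    		time.Sleep(delay)
    	}
    	defer os.Remove(lock)
    	return os.WriteFile(path, data, 0o600)
    }

    func main() {
    	err := writeFileLocked("/tmp/kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute)
    	fmt.Println("write:", err)
    }
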
	I1002 20:48:28.437072  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.437245  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.437717  103439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:48:28.437732  103439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:48:28.437753  103439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:48:28.437760  103439 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:48:28.437765  103439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:48:28.437782  103439 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:48:28.438160  103439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:48:28.446094  103439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:48:28.446137  103439 kubeadm.go:601] duration metric: took 17.717766ms to restartPrimaryControlPlane
	I1002 20:48:28.446149  103439 kubeadm.go:402] duration metric: took 55.025148ms to StartCluster
	I1002 20:48:28.446168  103439 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.446285  103439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.447035  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.447291  103439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:48:28.447487  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:28.447429  103439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:48:28.447531  103439 addons.go:69] Setting storage-provisioner=true in profile "functional-012915"
	I1002 20:48:28.447538  103439 addons.go:69] Setting default-storageclass=true in profile "functional-012915"
	I1002 20:48:28.447553  103439 addons.go:238] Setting addon storage-provisioner=true in "functional-012915"
	I1002 20:48:28.447556  103439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-012915"
	I1002 20:48:28.447587  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.447847  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.447963  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.456904  103439 out.go:179] * Verifying Kubernetes components...
	I1002 20:48:28.458283  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:28.468928  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.469101  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.469369  103439 addons.go:238] Setting addon default-storageclass=true in "functional-012915"
	I1002 20:48:28.469428  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.469783  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.469862  103439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:48:28.471474  103439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.471499  103439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:48:28.471557  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.496201  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.497174  103439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.497196  103439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:48:28.497262  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.518487  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.562123  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:28.575162  103439 node_ready.go:35] waiting up to 6m0s for node "functional-012915" to be "Ready" ...
	I1002 20:48:28.575316  103439 type.go:168] "Request Body" body=""
	I1002 20:48:28.575388  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:28.575672  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:28.608117  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.625656  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.661232  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.663490  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.663556  103439 retry.go:31] will retry after 361.771557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679351  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.679399  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679416  103439 retry.go:31] will retry after 152.242547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
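
	Both applies fail identically because the kubelet was just restarted and nothing is listening on :8441 yet; retry.go responds with randomized, roughly growing delays (152ms and 361ms here, reaching about 8s further down) instead of a fixed interval. A minimal sketch of such a jittered backoff loop; the constants are illustrative, not minikube's actual schedule:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithJitter keeps calling apply until it succeeds, sleeping a
    // randomized, roughly doubling delay between attempts, the pattern
    // behind the "will retry after ..." lines above.
    func retryWithJitter(apply func() error, maxAttempts int) error {
    	base := 200 * time.Millisecond
    	var err error
    	for attempt := 0; attempt < maxAttempts; attempt++ {
    		if err = apply(); err == nil {
    			return nil
    		}
    		// Jitter: anywhere from 0.5x to 1.5x of the current base.
    		d := time.Duration(float64(base) * (0.5 + rand.Float64()))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	attempts := 0
    	err := retryWithJitter(func() error {
    		attempts++
    		if attempts < 4 {
    			return fmt.Errorf("connection refused")
    		}
    		return nil
    	}, 10)
    	fmt.Println("done:", err)
    }
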
	I1002 20:48:28.831815  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.883542  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.883591  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.883623  103439 retry.go:31] will retry after 207.681653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.025956  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.075113  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.076262  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.076342  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.076623  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:29.077506  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.077533  103439 retry.go:31] will retry after 323.914971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.091861  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.140394  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.142831  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.142876  103439 retry.go:31] will retry after 594.351303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.402253  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.454867  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.454924  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.454957  103439 retry.go:31] will retry after 314.476021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.576263  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.576411  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.576803  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:29.738004  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.769756  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.788694  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.790987  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.791025  103439 retry.go:31] will retry after 1.197724944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822453  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.822502  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822528  103439 retry.go:31] will retry after 662.931836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.075955  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.076032  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.076409  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:30.485957  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:30.538516  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:30.538557  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.538578  103439 retry.go:31] will retry after 1.629504367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.575804  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.575880  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.576213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:30.576271  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
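
	node_ready.go polls GET /api/v1/nodes/functional-012915 twice per second and periodically surfaces the connection-refused warning, continuing until the apiserver answers or the 6m budget runs out. The shape of that wait, sketched against plain net/http (a real client would present the TLS client certificates from the kapi config above and would parse the NodeReady condition from the response):

    package main

    import (
    	"context"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForNodeReady polls the nodes URL every interval until the
    // apiserver answers 200 or the context deadline elapses.
    func waitForNodeReady(ctx context.Context, url string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		} else {
    			fmt.Printf("error getting node (will retry): %v\n", err)
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	err := waitForNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-012915", 500*time.Millisecond)
    	fmt.Println("wait result:", err)
    }
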
	I1002 20:48:30.989890  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.043558  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.043619  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.043637  103439 retry.go:31] will retry after 801.444903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.075880  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.075960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.576114  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.576220  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.576603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.845951  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.899339  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.899391  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.899410  103439 retry.go:31] will retry after 2.181457366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.075931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.076334  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:32.168648  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:32.220495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:32.220539  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.220557  103439 retry.go:31] will retry after 1.373851602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.576076  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.576533  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:32.576599  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:33.076393  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.076861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.576337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.595591  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:33.646012  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:33.648297  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:33.648332  103439 retry.go:31] will retry after 3.090030694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.075896  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.075981  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.076263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:34.081465  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:34.133647  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:34.133724  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.133770  103439 retry.go:31] will retry after 3.497111827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.576313  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.576409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.576832  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:34.576893  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:35.075636  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.076135  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:35.575728  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.576239  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.076110  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.076196  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.076574  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.575482  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.575578  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.575974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.739297  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:36.791716  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:36.791786  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:36.791808  103439 retry.go:31] will retry after 4.619526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.076288  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.076368  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.076721  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:37.076814  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:37.576414  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.576492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.576867  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:37.632068  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:37.685537  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:37.685582  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.685612  103439 retry.go:31] will retry after 3.179037423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:38.076157  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.076633  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... five further identical GET polls of https://192.168.49.2:8441/api/v1/nodes/functional-012915 at ~500ms intervals, 20:48:38.576 through 20:48:40.576, all returning empty responses; W1002 20:48:39.576 node_ready.go:55 logged the usual "connection refused" warning ...]
	I1002 20:48:40.865793  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:40.922102  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:40.922154  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:40.922173  103439 retry.go:31] will retry after 8.017978865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.075452  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.075541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.075959  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:41.412402  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:41.462892  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:41.465283  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.465317  103439 retry.go:31] will retry after 6.722422885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
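
Every one of these failures has the same root cause: kubectl's client-side validation tries to download the OpenAPI schema from the apiserver, and nothing is listening on localhost:8441. The suggested --validate=false would only skip the schema download; the apply itself still needs a live apiserver. A hedged sketch of gating the apply on the apiserver's /readyz endpoint instead (hypothetical helper and timeout values; not how minikube sequences its addon applies):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// applyWhenReady waits for the apiserver health endpoint to answer before
// shelling out to kubectl, rather than retrying applies against a dead port.
func applyWhenReady(manifest string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster serves a self-signed cert, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://localhost:8441/readyz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				out, applyErr := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
				fmt.Print(string(out))
				return applyErr
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver on :8441 never became ready")
}

func main() {
	if err := applyWhenReady("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("apply failed:", err)
	}
}
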
	I1002 20:48:41.575519  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.575978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:41.576042  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:42.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.075773  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... thirteen further identical polls, 20:48:42.575 through 20:48:48.076, all unanswered; node_ready.go:55 repeated its "connection refused" warning at 20:48:43.576 and 20:48:46.076 ...]
	I1002 20:48:48.188198  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:48.240819  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.240876  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.240960  103439 retry.go:31] will retry after 5.203774684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.575470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.575548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.575916  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:48.575985  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:48.940390  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:48.992334  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.994935  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.994965  103439 retry.go:31] will retry after 7.700365391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:49.076327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.076416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.076830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... nine further identical polls, 20:48:49.575 through 20:48:53.077, all unanswered; "connection refused" warnings at 20:48:50.576 and 20:48:52.576 ...]
	I1002 20:48:53.445247  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:53.496043  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:53.498518  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.498561  103439 retry.go:31] will retry after 18.668445084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.576330  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... six further identical polls, 20:48:54.076 through 20:48:56.577, all unanswered; "connection refused" warning at 20:48:55.076 ...]
	I1002 20:48:56.695837  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:56.749495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:56.749534  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:56.749553  103439 retry.go:31] will retry after 17.757887541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:57.076066  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.076153  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.076611  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:57.076679  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
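
The node_ready.go:55 warnings come from a ~500ms loop that issues GET /api/v1/nodes/functional-012915 and inspects the node's Ready condition, tolerating transient errors like the refused connections here. A client-go sketch of the same check (assumed names and timeout; minikube's own loop carries extra retry plumbing):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// mirroring the GET loop in the log above. Sketch only.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Println("will retry:", err) // e.g. "connect: connection refused"
		}
		time.Sleep(500 * time.Millisecond) // the ~500ms cadence seen above
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(cs, "functional-012915", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
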
	[... polling continued every ~500ms, 20:48:57.576 through 20:49:12.077, every request unanswered; node_ready.go:55 repeated its "connection refused" warning at each ~2.5s check (20:48:59.576, 20:49:01.576, 20:49:04.076, 20:49:06.576, 20:49:08.576, 20:49:11.076) ...]
	I1002 20:49:12.168044  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:12.220925  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:12.220980  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.221004  103439 retry.go:31] will retry after 18.69466529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.575446  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.575535  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... three further identical polls, 20:49:13.075 through 20:49:14.076, all unanswered; "connection refused" warning at 20:49:13.576 ...]
	I1002 20:49:14.507714  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:14.560377  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:14.560441  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.560472  103439 retry.go:31] will retry after 29.222161527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.575630  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.575695  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.575976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... polling continued every ~500ms, 20:49:15.076 through 20:49:25.076, every request unanswered; "connection refused" warnings at 20:49:15.576, 20:49:18.076, 20:49:20.576, 20:49:22.576 and 20:49:25.076 ...]
	I1002 20:49:25.575710  103439 type.go:168] "Request Body" body=""
	I1002 20:49:25.575827  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:25.576189  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:26.075726  103439 type.go:168] "Request Body" body=""
	I1002 20:49:26.075816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:26.076175  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:26.575753  103439 type.go:168] "Request Body" body=""
	I1002 20:49:26.575829  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:26.576180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:27.075710  103439 type.go:168] "Request Body" body=""
	I1002 20:49:27.075799  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:27.076197  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:27.076268  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:27.575795  103439 type.go:168] "Request Body" body=""
	I1002 20:49:27.575897  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:27.576231  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:28.075845  103439 type.go:168] "Request Body" body=""
	I1002 20:49:28.075929  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:28.076311  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:28.576131  103439 type.go:168] "Request Body" body=""
	I1002 20:49:28.576205  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:28.576567  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:29.076227  103439 type.go:168] "Request Body" body=""
	I1002 20:49:29.076317  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:29.076686  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:29.076777  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:29.576355  103439 type.go:168] "Request Body" body=""
	I1002 20:49:29.576431  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:29.576786  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:30.075478  103439 type.go:168] "Request Body" body=""
	I1002 20:49:30.075569  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:30.075933  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:30.575478  103439 type.go:168] "Request Body" body=""
	I1002 20:49:30.575586  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:30.575938  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
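The loop collapsed above is minikube's node-readiness wait: it GETs the node object every ~500ms, checks its Ready condition, and logs a warning while the apiserver on 192.168.49.2:8441 stays unreachable. A minimal client-go sketch of the same pattern, assuming the kubeconfig path and node name seen in this log (pollNodeReady is a hypothetical helper, not minikube's actual code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // pollNodeReady blocks until the named node reports Ready, retrying on
    // errors the way the node_ready.go loop in the log does.
    func pollNodeReady(cs *kubernetes.Clientset, name string) {
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                // Mirrors the W... node_ready.go:55 lines: warn and keep retrying.
                fmt.Printf("error getting node %q (will retry): %v\n", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // the log shows a ~500ms poll interval
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pollNodeReady(cs, "functional-012915")
    }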
	I1002 20:49:30.916459  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:30.966432  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:30.968861  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:30.968901  103439 retry.go:31] will retry after 21.359119468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
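The retry.go:31 line above is minikube's generic retry wrapper at work: the apply failed, so another attempt is scheduled after a randomized wait (here 21.36s). A sketch of that pattern under stated assumptions: applyAddon shells out to the same command the ssh_runner lines record (in the real flow this runs over SSH inside the node), and the backoff bounds are illustrative guesses, not minikube's actual policy:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyAddon runs the same kubectl apply the ssh_runner lines show.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", manifest)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("%w\nstderr: %s", err, out)
        }
        return nil
    }

    func main() {
        manifest := "/etc/kubernetes/addons/storage-provisioner.yaml"
        for {
            if err := applyAddon(manifest); err == nil {
                fmt.Println("applied", manifest)
                return
            } else {
                // The log shows waits of ~21s and ~23s: a base interval plus
                // random jitter (the 20s + 0-5s bounds here are assumptions).
                wait := 20*time.Second + time.Duration(rand.Int63n(int64(5*time.Second)))
                fmt.Printf("apply failed, will retry after %s: %v\n", wait, err)
                time.Sleep(wait)
            }
        }
    }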
	I1002 20:49:31.076302  103439 type.go:168] "Request Body" body=""
	I1002 20:49:31.076392  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:31.076792  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:31.076872  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... poll continues every ~500ms through 20:49:43, all responses empty; the same warning recurs at 20:49:33, 20:49:36, 20:49:38, 20:49:40 and 20:49:43 ...]
	I1002 20:49:43.782991  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:43.835836  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:43.835901  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:43.835926  103439 retry.go:31] will retry after 22.850861202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
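Both apply failures share one root cause; the manifests are fine. kubectl validates by downloading the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download itself is refused (--validate=false would only mask this, since the apply would still need the apiserver). A plain TCP dial reproduces the underlying error; the addresses are taken from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // localhost:8441 is what kubectl's openapi download hits;
        // 192.168.49.2:8441 is what the node-readiness poll hits.
        for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Printf("%s: %v\n", addr, err) // expect "connect: connection refused"
                continue
            }
            conn.Close()
            fmt.Printf("%s: reachable\n", addr)
        }
    }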
	I1002 20:49:44.076251  103439 type.go:168] "Request Body" body=""
	I1002 20:49:44.076330  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:44.076662  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... poll continues every ~500ms, all responses empty; the node_ready.go:55 connection-refused warning recurs at 20:49:45, 20:49:48 and 20:49:50 ...]
	W1002 20:49:52.076515  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:52.328832  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:52.382480  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382546  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382704  103439 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:49:52.575971  103439 type.go:168] "Request Body" body=""
	I1002 20:49:52.576051  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:52.576411  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... poll continues every ~500ms through 20:50:06, all responses empty; the node_ready.go:55 connection-refused warning recurs at 20:49:54, 20:49:56, 20:49:59, 20:50:01, 20:50:03 and 20:50:05 ...]
	I1002 20:50:06.687689  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:50:06.737429  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739791  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739905  103439 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:50:06.742850  103439 out.go:179] * Enabled addons: 
	I1002 20:50:06.744531  103439 addons.go:514] duration metric: took 1m38.297120179s for enable addons: enabled=[]
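The two lines above give the shape of the failure: after 1m38s of retries every addon callback had failed, so the enabled list is empty. The duration metric is simply elapsed wall-clock time since the enable phase began; a minimal illustrative sketch (not minikube's code) of how such a line is produced:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        enabled := []string{} // every apply failed in this run, so nothing was enabled
        // ... enable-addons work (applies, retries) happens here ...
        fmt.Printf("duration metric: took %s for enable addons: enabled=%v\n",
            time.Since(start), enabled)
    }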
	I1002 20:50:07.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:50:07.076424  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:07.076810  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:07.076887  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[... poll continues every ~500ms through 20:50:13, all responses empty; the same warning recurs at 20:50:09 and 20:50:12 ...]
	I1002 20:50:14.075814  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.075900  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:14.576194  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:14.576695  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:15.075361  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.075442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.075840  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:15.575616  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.575700  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.075936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.076365  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.576255  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.576335  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.576673  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:16.576732  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:17.075466  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:17.575727  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.076032  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.076123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.076487  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.576201  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.576280  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.576630  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:19.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:50:19.075436  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:19.075879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:19.075940  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:19.575662  103439 type.go:168] "Request Body" body=""
	I1002 20:50:19.575765  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:19.576112  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:20.075942  103439 type.go:168] "Request Body" body=""
	I1002 20:50:20.076022  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:20.076365  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:20.576167  103439 type.go:168] "Request Body" body=""
	I1002 20:50:20.576281  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:20.576638  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:21.075449  103439 type.go:168] "Request Body" body=""
	I1002 20:50:21.075533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:21.075947  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:21.076012  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:21.575710  103439 type.go:168] "Request Body" body=""
	I1002 20:50:21.575816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:21.576163  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:22.076027  103439 type.go:168] "Request Body" body=""
	I1002 20:50:22.076112  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:22.076486  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:22.576328  103439 type.go:168] "Request Body" body=""
	I1002 20:50:22.576406  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:22.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:23.075575  103439 type.go:168] "Request Body" body=""
	I1002 20:50:23.075653  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:23.076015  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:23.076102  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:23.575919  103439 type.go:168] "Request Body" body=""
	I1002 20:50:23.576001  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:23.576441  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:24.076301  103439 type.go:168] "Request Body" body=""
	I1002 20:50:24.076385  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:24.076732  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:24.575497  103439 type.go:168] "Request Body" body=""
	I1002 20:50:24.575575  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:24.575977  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:25.075906  103439 type.go:168] "Request Body" body=""
	I1002 20:50:25.076002  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:25.076372  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:25.076430  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:25.575772  103439 type.go:168] "Request Body" body=""
	I1002 20:50:25.575847  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:25.576205  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:26.075989  103439 type.go:168] "Request Body" body=""
	I1002 20:50:26.076058  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:26.076440  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:26.576301  103439 type.go:168] "Request Body" body=""
	I1002 20:50:26.576389  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:26.576734  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:27.075548  103439 type.go:168] "Request Body" body=""
	I1002 20:50:27.075630  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:27.076087  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:27.575871  103439 type.go:168] "Request Body" body=""
	I1002 20:50:27.575960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:27.576295  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:27.576366  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:28.075983  103439 type.go:168] "Request Body" body=""
	I1002 20:50:28.076395  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:28.076839  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:28.575729  103439 type.go:168] "Request Body" body=""
	I1002 20:50:28.575838  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:28.576242  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:29.075826  103439 type.go:168] "Request Body" body=""
	I1002 20:50:29.075899  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:29.076269  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:29.576058  103439 type.go:168] "Request Body" body=""
	I1002 20:50:29.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:29.576557  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:29.576620  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:30.075394  103439 type.go:168] "Request Body" body=""
	I1002 20:50:30.075476  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:30.075848  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:30.575440  103439 type.go:168] "Request Body" body=""
	I1002 20:50:30.575513  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:30.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:31.075504  103439 type.go:168] "Request Body" body=""
	I1002 20:50:31.075583  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:31.075947  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:31.575533  103439 type.go:168] "Request Body" body=""
	I1002 20:50:31.575614  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:31.576035  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:32.075585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.075666  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.076026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:32.076094  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:32.575632  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.575709  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.576117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.075652  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.076100  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.575758  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.576149  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:34.075715  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.075810  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.076153  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:34.076216  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:34.575779  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.575858  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.576247  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.076148  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.076233  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.076598  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.576262  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.576347  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.576802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.075374  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.075824  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.575422  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.575848  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:36.575906  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:37.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.075521  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.075904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:37.575460  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.575565  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.575952  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.075497  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.075579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.075949  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.575923  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.576292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:38.576357  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:39.075970  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.076045  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.076459  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:39.576183  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.576276  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.576637  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.075394  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.075469  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.575390  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.575465  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.575823  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:41.076191  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.076274  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.076628  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:41.076694  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:41.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.576770  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.076380  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.076481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.076834  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.575420  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.075513  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.075604  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.075967  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.575585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.575664  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:43.576146  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:44.075681  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.076261  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:44.575868  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.575964  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.076248  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.076357  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.076714  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.576124  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.576501  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:45.576565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:46.076153  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.076231  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:46.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.576334  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.576706  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.076362  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.076446  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.575474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.575854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:48.075429  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.075510  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:48.075914  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:48.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.575495  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.575887  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.075463  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.075543  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.075937  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.575579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.575950  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:50.075789  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.075872  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.076231  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:50.076332  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:50.575815  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.575914  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.075877  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.075952  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.076337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.576100  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.576202  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.576539  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:52.076187  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.076262  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.076592  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:52.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:52.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.075460  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.075819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.575520  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.575927  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.075511  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.075600  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.075971  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.575643  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.576052  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:54.576136  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:55.075833  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.075908  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.076313  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:55.575945  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.576033  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.576428  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.076124  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.076205  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.076588  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.576221  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.576325  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.576662  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:56.576724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:57.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.076386  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.076786  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:57.575325  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.575412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.575787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.076352  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.076422  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.076854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.575806  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.575901  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:59.075853  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.075934  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.076321  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:59.076383  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:59.575967  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.576070  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.576437  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:00.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:51:00.076327  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:00.076671  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:00.576348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:00.576435  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:00.576826  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:01.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:51:01.075456  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:01.075840  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:01.575383  103439 type.go:168] "Request Body" body=""
	I1002 20:51:01.575471  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:01.575834  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:01.575909  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:02.075405  103439 type.go:168] "Request Body" body=""
	I1002 20:51:02.075486  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:02.075854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:02.575427  103439 type.go:168] "Request Body" body=""
	I1002 20:51:02.575517  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:02.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:03.075458  103439 type.go:168] "Request Body" body=""
	I1002 20:51:03.075534  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:03.075891  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:03.576314  103439 type.go:168] "Request Body" body=""
	I1002 20:51:03.576387  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:03.576727  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:03.576806  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:04.076341  103439 type.go:168] "Request Body" body=""
	I1002 20:51:04.076414  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:04.076789  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:04.575407  103439 type.go:168] "Request Body" body=""
	I1002 20:51:04.575488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:04.575830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:05.075787  103439 type.go:168] "Request Body" body=""
	I1002 20:51:05.075860  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:05.076258  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:05.575847  103439 type.go:168] "Request Body" body=""
	I1002 20:51:05.575921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:05.576283  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:06.075890  103439 type.go:168] "Request Body" body=""
	I1002 20:51:06.075964  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:06.076395  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:06.076456  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:06.575993  103439 type.go:168] "Request Body" body=""
	I1002 20:51:06.576075  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:06.576412  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:07.076071  103439 type.go:168] "Request Body" body=""
	I1002 20:51:07.076154  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:07.076593  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:07.576229  103439 type.go:168] "Request Body" body=""
	I1002 20:51:07.576309  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:07.576657  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:08.076385  103439 type.go:168] "Request Body" body=""
	I1002 20:51:08.076464  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:08.076893  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:08.076954  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:08.575699  103439 type.go:168] "Request Body" body=""
	I1002 20:51:08.575787  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:08.576128  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:09.075675  103439 type.go:168] "Request Body" body=""
	I1002 20:51:09.075764  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:09.076126  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:09.576325  103439 type.go:168] "Request Body" body=""
	I1002 20:51:09.576432  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:09.576808  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:10.075645  103439 type.go:168] "Request Body" body=""
	I1002 20:51:10.075730  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:10.076142  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:10.575721  103439 type.go:168] "Request Body" body=""
	I1002 20:51:10.575820  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:10.576241  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:10.576304  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:11.075870  103439 type.go:168] "Request Body" body=""
	I1002 20:51:11.075955  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:11.076373  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:11.576041  103439 type.go:168] "Request Body" body=""
	I1002 20:51:11.576140  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:11.576505  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:12.076251  103439 type.go:168] "Request Body" body=""
	I1002 20:51:12.076345  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:12.076705  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:12.576352  103439 type.go:168] "Request Body" body=""
	I1002 20:51:12.576428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:12.576813  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:12.576892  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:13.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.075526  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.075917  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:13.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:51:13.575640  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:13.576048  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.075644  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.075715  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:14.575664  103439 type.go:168] "Request Body" body=""
	I1002 20:51:14.575795  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:14.576210  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:15.076065  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.076151  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.076548  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:15.076609  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:15.576209  103439 type.go:168] "Request Body" body=""
	I1002 20:51:15.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:15.576658  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.076387  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.076472  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:16.575432  103439 type.go:168] "Request Body" body=""
	I1002 20:51:16.575509  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:16.575925  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.075499  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.075588  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.075953  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:17.575636  103439 type.go:168] "Request Body" body=""
	I1002 20:51:17.575717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:17.576139  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:17.576206  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:18.075726  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.075840  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.076170  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:18.576043  103439 type.go:168] "Request Body" body=""
	I1002 20:51:18.576134  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:18.576500  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.076156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.076608  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:19.576287  103439 type.go:168] "Request Body" body=""
	I1002 20:51:19.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:19.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:19.576823  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:20.075605  103439 type.go:168] "Request Body" body=""
	I1002 20:51:20.075689  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:20.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:20.575671  103439 type.go:168] "Request Body" body=""
	I1002 20:51:20.575771  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:20.576160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:21.075760  103439 type.go:168] "Request Body" body=""
	I1002 20:51:21.075844  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:21.076251  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:21.575856  103439 type.go:168] "Request Body" body=""
	I1002 20:51:21.575946  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:21.576277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:22.075938  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.076020  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.076385  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:22.076458  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:22.576058  103439 type.go:168] "Request Body" body=""
	I1002 20:51:22.576150  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:22.576496  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.076164  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.076256  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.076616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:23.576268  103439 type.go:168] "Request Body" body=""
	I1002 20:51:23.576350  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:23.576704  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:24.076361  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.076448  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.076818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:24.076882  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:24.575376  103439 type.go:168] "Request Body" body=""
	I1002 20:51:24.575452  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:24.575842  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.075817  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.075926  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.076324  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:25.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:51:25.575977  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:25.576326  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.076018  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.076112  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.076484  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:26.576139  103439 type.go:168] "Request Body" body=""
	I1002 20:51:26.576216  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:26.576529  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:26.576601  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:27.076219  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.076333  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.076702  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:27.576348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:27.576421  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:27.576775  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.075490  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.075928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:28.575733  103439 type.go:168] "Request Body" body=""
	I1002 20:51:28.575828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:28.576180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:29.075796  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.075881  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.076267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:29.076325  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:29.575904  103439 type.go:168] "Request Body" body=""
	I1002 20:51:29.575995  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:29.576458  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.076348  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.076430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.076826  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:30.575400  103439 type.go:168] "Request Body" body=""
	I1002 20:51:30.575481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:30.575844  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.075477  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.075558  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.076018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:31.575552  103439 type.go:168] "Request Body" body=""
	I1002 20:51:31.575626  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:31.575957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:31.576019  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:32.075567  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.075648  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.076000  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:32.575617  103439 type.go:168] "Request Body" body=""
	I1002 20:51:32.575691  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:32.576091  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.075777  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.075867  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.076312  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:33.575892  103439 type.go:168] "Request Body" body=""
	I1002 20:51:33.575966  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:33.576360  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:33.576436  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:34.075990  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.076064  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.076423  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:34.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:51:34.576242  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:34.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.075451  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.075544  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.075944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:35.575553  103439 type.go:168] "Request Body" body=""
	I1002 20:51:35.575632  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:35.575984  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:36.075611  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.075690  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.076097  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:36.076170  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:36.575781  103439 type.go:168] "Request Body" body=""
	I1002 20:51:36.575857  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:36.576209  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.075787  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.075868  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.076233  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:37.575919  103439 type.go:168] "Request Body" body=""
	I1002 20:51:37.576016  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:37.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:38.076037  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.076126  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.076506  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:38.076573  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:38.576216  103439 type.go:168] "Request Body" body=""
	I1002 20:51:38.576315  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:38.576715  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.076566  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.076671  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.077118  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:39.575701  103439 type.go:168] "Request Body" body=""
	I1002 20:51:39.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:39.576184  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:40.076137  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.076214  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.076550  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:40.076615  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:40.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:51:40.576390  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:40.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.075322  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.075403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.075780  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:41.575391  103439 type.go:168] "Request Body" body=""
	I1002 20:51:41.575470  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:41.575870  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.075943  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:42.575565  103439 type.go:168] "Request Body" body=""
	I1002 20:51:42.575660  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:42.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:42.576127  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:43.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.075718  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.076099  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:43.575699  103439 type.go:168] "Request Body" body=""
	I1002 20:51:43.575814  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:43.576217  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.075869  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.075942  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.076297  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:44.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:51:44.575949  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:44.576319  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:44.576388  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:45.076331  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.076413  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.076728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:45.575369  103439 type.go:168] "Request Body" body=""
	I1002 20:51:45.575463  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:45.575833  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.075482  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.075561  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.075954  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:46.575542  103439 type.go:168] "Request Body" body=""
	I1002 20:51:46.575624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:46.575972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:47.075530  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.075605  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.076010  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:47.076101  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:47.575610  103439 type.go:168] "Request Body" body=""
	I1002 20:51:47.575685  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:47.576069  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.075710  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.075809  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:48.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:51:48.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:48.576499  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:49.076190  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.076263  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.076621  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:49.076681  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:49.576270  103439 type.go:168] "Request Body" body=""
	I1002 20:51:49.576351  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:49.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.075539  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.075624  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:50.575631  103439 type.go:168] "Request Body" body=""
	I1002 20:51:50.575707  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:50.576114  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.075711  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.075818  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.076157  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:51.575814  103439 type.go:168] "Request Body" body=""
	I1002 20:51:51.575890  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:51.576235  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:51.576316  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:52.075820  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.075911  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.076272  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:52.575858  103439 type.go:168] "Request Body" body=""
	I1002 20:51:52.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:52.576284  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.075878  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.075963  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.076342  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:53.576038  103439 type.go:168] "Request Body" body=""
	I1002 20:51:53.576123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:53.576491  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:53.576559  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:54.076212  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.076289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.076627  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:54.576310  103439 type.go:168] "Request Body" body=""
	I1002 20:51:54.576389  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:54.576719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.075503  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.075581  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.075972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:55.575557  103439 type.go:168] "Request Body" body=""
	I1002 20:51:55.575642  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:55.576018  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:56.075601  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.076064  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:56.076141  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:56.575721  103439 type.go:168] "Request Body" body=""
	I1002 20:51:56.575815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:56.576144  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.075712  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.075821  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.076181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:57.575767  103439 type.go:168] "Request Body" body=""
	I1002 20:51:57.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:57.576216  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:58.075841  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.075920  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:58.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:51:58.576187  103439 type.go:168] "Request Body" body=""
	I1002 20:51:58.576265  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:58.576613  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[log condensed: the GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 request/response cycle shown above repeated unchanged every ~500 ms from 20:51:59 through 20:53:00, each attempt returning immediately with empty status ("connection refused"); node_ready.go:55 re-logged the same "will retry" warning roughly every 2–2.5 s over the interval]
	I1002 20:53:00.575527  103439 type.go:168] "Request Body" body=""
	I1002 20:53:00.575613  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:00.576021  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:01.075639  103439 type.go:168] "Request Body" body=""
	I1002 20:53:01.075720  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:01.076158  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:01.076236  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:01.575757  103439 type.go:168] "Request Body" body=""
	I1002 20:53:01.575840  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:01.576224  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:02.075855  103439 type.go:168] "Request Body" body=""
	I1002 20:53:02.075943  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:02.076346  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:02.576050  103439 type.go:168] "Request Body" body=""
	I1002 20:53:02.576149  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:02.576502  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:03.076160  103439 type.go:168] "Request Body" body=""
	I1002 20:53:03.076234  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:03.076597  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:03.076676  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:03.575963  103439 type.go:168] "Request Body" body=""
	I1002 20:53:03.576036  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:03.576386  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.076077  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.076167  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.076509  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:04.576256  103439 type.go:168] "Request Body" body=""
	I1002 20:53:04.576341  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:04.576710  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.075500  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.076015  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:05.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:53:05.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:05.576053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:05.576126  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:06.075659  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.075778  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.076160  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:06.575713  103439 type.go:168] "Request Body" body=""
	I1002 20:53:06.575808  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:06.576161  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.075791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.075896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.076278  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:07.575857  103439 type.go:168] "Request Body" body=""
	I1002 20:53:07.575932  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:07.576289  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:07.576361  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:08.075859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.075955  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.076329  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:08.576047  103439 type.go:168] "Request Body" body=""
	I1002 20:53:08.576136  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:08.576492  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.076215  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.076582  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:09.576306  103439 type.go:168] "Request Body" body=""
	I1002 20:53:09.576382  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:09.576707  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:09.576802  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:10.075438  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.075516  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.075948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:10.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:10.575609  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:10.575983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.075661  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.075769  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.076130  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:11.575757  103439 type.go:168] "Request Body" body=""
	I1002 20:53:11.575830  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:11.576189  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:12.075811  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.076252  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:12.076323  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:12.575823  103439 type.go:168] "Request Body" body=""
	I1002 20:53:12.575896  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:12.576250  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.075897  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.075987  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:13.576059  103439 type.go:168] "Request Body" body=""
	I1002 20:53:13.576149  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:13.576497  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:14.076230  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.076305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.076648  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:14.076724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:14.576300  103439 type.go:168] "Request Body" body=""
	I1002 20:53:14.576375  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:14.576711  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.075457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.075942  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:15.575476  103439 type.go:168] "Request Body" body=""
	I1002 20:53:15.575564  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:15.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.075498  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.075597  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.075974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:16.575530  103439 type.go:168] "Request Body" body=""
	I1002 20:53:16.575607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:16.575990  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:16.576057  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:17.075599  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.075683  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.076066  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:17.575633  103439 type.go:168] "Request Body" body=""
	I1002 20:53:17.575706  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:17.576088  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.075675  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.075775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.076143  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:18.575997  103439 type.go:168] "Request Body" body=""
	I1002 20:53:18.576068  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:18.576432  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:18.576492  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:19.076147  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.076228  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:19.576248  103439 type.go:168] "Request Body" body=""
	I1002 20:53:19.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:19.576675  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.075447  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.075529  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.075898  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:20.575465  103439 type.go:168] "Request Body" body=""
	I1002 20:53:20.575538  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:20.575923  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:21.075521  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.075978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:21.076044  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:21.575665  103439 type.go:168] "Request Body" body=""
	I1002 20:53:21.575775  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:21.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.075717  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.075828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.076183  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:22.575808  103439 type.go:168] "Request Body" body=""
	I1002 20:53:22.575897  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:22.576256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:23.075928  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.076009  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.076405  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:23.076478  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:23.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:53:23.576168  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:23.576558  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.076203  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.076290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.076643  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:24.576321  103439 type.go:168] "Request Body" body=""
	I1002 20:53:24.576404  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:24.576814  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.075708  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.075822  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.076180  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:25.575791  103439 type.go:168] "Request Body" body=""
	I1002 20:53:25.575873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:25.576263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:25.576328  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:26.075894  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.075978  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.076323  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:26.576003  103439 type.go:168] "Request Body" body=""
	I1002 20:53:26.576076  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:26.576445  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.076142  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.076232  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.076600  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:27.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:53:27.576332  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:27.576701  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:27.576806  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:28.076370  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.076473  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.076858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:28.575697  103439 type.go:168] "Request Body" body=""
	I1002 20:53:28.575806  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:28.576163  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.075772  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.075851  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.076254  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:29.575812  103439 type.go:168] "Request Body" body=""
	I1002 20:53:29.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:29.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:30.076121  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.076195  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.076543  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:30.076603  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:30.576211  103439 type.go:168] "Request Body" body=""
	I1002 20:53:30.576293  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:30.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.076423  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.076802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:31.575356  103439 type.go:168] "Request Body" body=""
	I1002 20:53:31.575434  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:31.575808  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:32.575336  103439 type.go:168] "Request Body" body=""
	I1002 20:53:32.575410  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:32.575777  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:32.575837  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:33.075392  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.075475  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.075865  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:33.575440  103439 type.go:168] "Request Body" body=""
	I1002 20:53:33.575517  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:33.575846  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.075534  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.075996  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:34.575566  103439 type.go:168] "Request Body" body=""
	I1002 20:53:34.575655  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:34.576020  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:34.576093  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:35.075839  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.075921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.076292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:35.575879  103439 type.go:168] "Request Body" body=""
	I1002 20:53:35.575953  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:35.576311  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.075998  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.076095  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.076469  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:36.576150  103439 type.go:168] "Request Body" body=""
	I1002 20:53:36.576229  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:36.576577  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:36.576639  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:37.076335  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.076417  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.076801  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:37.575377  103439 type.go:168] "Request Body" body=""
	I1002 20:53:37.575453  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:37.575879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.075474  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.075548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.075957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:38.575859  103439 type.go:168] "Request Body" body=""
	I1002 20:53:38.575935  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:38.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:39.076017  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.076111  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.076475  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:39.076596  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:39.576181  103439 type.go:168] "Request Body" body=""
	I1002 20:53:39.576257  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:39.576614  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.075456  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.075533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:40.575509  103439 type.go:168] "Request Body" body=""
	I1002 20:53:40.575586  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:40.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.075524  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.075607  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.075983  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:41.575591  103439 type.go:168] "Request Body" body=""
	I1002 20:53:41.575678  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:41.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:41.576118  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:42.075648  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.076108  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:42.575677  103439 type.go:168] "Request Body" body=""
	I1002 20:53:42.575790  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:42.576150  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.075731  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.075831  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.076198  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:43.575889  103439 type.go:168] "Request Body" body=""
	I1002 20:53:43.575972  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:43.576366  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:43.576426  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:44.075602  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.075701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.076125  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:44.575700  103439 type.go:168] "Request Body" body=""
	I1002 20:53:44.575816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:44.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.076167  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.076247  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.076676  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:45.576379  103439 type.go:168] "Request Body" body=""
	I1002 20:53:45.576462  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:45.576855  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:45.576932  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:46.075425  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.075515  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.075882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:46.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:53:46.575563  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:46.575944  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.075576  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.075649  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.076028  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:47.575645  103439 type.go:168] "Request Body" body=""
	I1002 20:53:47.575724  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:47.576173  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:48.075842  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.075922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.076288  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:48.076360  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:48.576176  103439 type.go:168] "Request Body" body=""
	I1002 20:53:48.576259  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:48.576606  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.076289  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.076364  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.076718  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:49.575397  103439 type.go:168] "Request Body" body=""
	I1002 20:53:49.575476  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:49.575864  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.075484  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.075575  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:50.575634  103439 type.go:168] "Request Body" body=""
	I1002 20:53:50.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:50.576140  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:50.576223  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:51.075766  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.075855  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.076251  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:51.575845  103439 type.go:168] "Request Body" body=""
	I1002 20:53:51.575936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:51.576310  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.076007  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.076100  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.076512  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:52.576200  103439 type.go:168] "Request Body" body=""
	I1002 20:53:52.576311  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:52.576659  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:52.576723  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:53.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.076426  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:53.575357  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.575435  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.575822  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.075408  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.075485  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.075889  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.575457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.575534  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.575882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:55.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.075915  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:55.076327  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:55.575878  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.575957  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.576307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.075931  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.076017  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.076382  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.576046  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.576133  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.576476  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:57.076106  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.076183  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.076505  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:57.076565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:57.576226  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.576298  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.076297  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.076394  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.076731  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.575639  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.576105  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.075691  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.075862  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.076223  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.575805  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.576267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:59.576342  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:00.076234  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.076318  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.076665  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:00.576298  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.576374  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.576723  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.075366  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.075825  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.575447  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.575533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.575904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:02.075556  103439 type.go:168] "Request Body" body=""
	I1002 20:54:02.075644  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:02.076053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:02.076132  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:02.575602  103439 type.go:168] "Request Body" body=""
	I1002 20:54:02.575678  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:02.576035  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:03.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:03.075713  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:03.076098  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:03.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:54:03.575732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:03.576098  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:04.075645  103439 type.go:168] "Request Body" body=""
	I1002 20:54:04.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:04.076102  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:04.076162  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:04.575674  103439 type.go:168] "Request Body" body=""
	I1002 20:54:04.575774  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:04.576120  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:05.075981  103439 type.go:168] "Request Body" body=""
	I1002 20:54:05.076063  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:05.076424  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:05.576045  103439 type.go:168] "Request Body" body=""
	I1002 20:54:05.576128  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:05.576498  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:06.076278  103439 type.go:168] "Request Body" body=""
	I1002 20:54:06.076361  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:06.076719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:06.076815  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:06.575347  103439 type.go:168] "Request Body" body=""
	I1002 20:54:06.575428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:06.575821  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:07.075435  103439 type.go:168] "Request Body" body=""
	I1002 20:54:07.075516  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:07.075897  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:07.575486  103439 type.go:168] "Request Body" body=""
	I1002 20:54:07.575563  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:07.575958  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:08.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:08.075701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:08.076060  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:08.575979  103439 type.go:168] "Request Body" body=""
	I1002 20:54:08.576066  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:08.576467  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:08.576529  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:09.076208  103439 type.go:168] "Request Body" body=""
	I1002 20:54:09.076292  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:09.076707  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:09.576320  103439 type.go:168] "Request Body" body=""
	I1002 20:54:09.576395  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:09.576817  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:10.075592  103439 type.go:168] "Request Body" body=""
	I1002 20:54:10.075669  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:10.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:10.575606  103439 type.go:168] "Request Body" body=""
	I1002 20:54:10.575688  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:10.576056  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:11.075680  103439 type.go:168] "Request Body" body=""
	I1002 20:54:11.075788  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:11.076183  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:11.076274  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:11.575788  103439 type.go:168] "Request Body" body=""
	I1002 20:54:11.575870  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:11.576222  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:12.075860  103439 type.go:168] "Request Body" body=""
	I1002 20:54:12.075940  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:12.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:12.575971  103439 type.go:168] "Request Body" body=""
	I1002 20:54:12.576043  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:12.576403  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:13.076171  103439 type.go:168] "Request Body" body=""
	I1002 20:54:13.076258  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:13.076628  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:13.076688  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:13.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:54:13.576339  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:13.576685  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.076408  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.076857  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.575582  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.575948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.075808  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.076275  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.575894  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.575975  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.576435  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:15.576516  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:16.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.076226  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.076603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:16.576326  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.576788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.075351  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.075430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.075787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.575559  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.575961  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:18.075538  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.075997  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:18.076063  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:18.575954  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.576031  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.576391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.076057  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.076145  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.076521  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.576266  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.576354  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.576728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.075522  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.075613  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.075992  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.576111  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:20.576172  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:21.075690  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.075834  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.076211  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:21.575853  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.575938  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.076012  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.076106  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.076455  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.576180  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.576267  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.576639  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:22.576703  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:23.076280  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.076362  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.076729  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:23.575332  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.575409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.575788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.075455  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.075827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.575436  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.575524  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.575897  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:25.075680  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.075782  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.076141  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:25.076204  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:25.575730  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.575836  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.075905  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.076277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.576092  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.576245  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:27.076357  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.076442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.076807  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:27.076864  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:27.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.575541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.075717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:28.076117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.576130  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.576214  103439 node_ready.go:38] duration metric: took 6m0.001003861s for node "functional-012915" to be "Ready" ...
	I1002 20:54:28.579396  103439 out.go:203] 
	W1002 20:54:28.581273  103439 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:54:28.581294  103439 out.go:285] * 
	W1002 20:54:28.583020  103439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:54:28.584974  103439 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 20:54:36 functional-012915 crio[2919]: time="2025-10-02T20:54:36.095866045Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=917301ce-b6e2-4c48-adbb-d577c3ef3c79 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:36 functional-012915 crio[2919]: time="2025-10-02T20:54:36.120171825Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=03c6897a-01f9-4931-a642-8cee0ed2872f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:36 functional-012915 crio[2919]: time="2025-10-02T20:54:36.120300425Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=03c6897a-01f9-4931-a642-8cee0ed2872f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:36 functional-012915 crio[2919]: time="2025-10-02T20:54:36.120353774Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=03c6897a-01f9-4931-a642-8cee0ed2872f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.35253682Z" level=info msg="Checking image status: minikube-local-cache-test:functional-012915" id=b9bc8cc2-58c7-42ac-9777-ff5d4858beaa name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.375831229Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-012915" id=6395d18a-defd-4b67-b95a-11d9aee200ca name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.375967456Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-012915 not found" id=6395d18a-defd-4b67-b95a-11d9aee200ca name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.37599904Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-012915 found" id=6395d18a-defd-4b67-b95a-11d9aee200ca name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.40038513Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-012915" id=926c2e98-32aa-4250-ae42-2dcbbc4870ff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.400538843Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-012915 not found" id=926c2e98-32aa-4250-ae42-2dcbbc4870ff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:37 functional-012915 crio[2919]: time="2025-10-02T20:54:37.400593585Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-012915 found" id=926c2e98-32aa-4250-ae42-2dcbbc4870ff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.132348383Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fa53cd29-447a-42e9-b93a-25a28e68ae7a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426552097Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426684113Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426721338Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864572409Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864779357Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864827587Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890484237Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890631905Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890675201Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914567456Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914714164Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914774629Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:39 functional-012915 crio[2919]: time="2025-10-02T20:54:39.38454262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=042112ea-c568-48d2-8cce-9b4035e7b4d2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:54:40.771902    5267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:40.772714    5267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:40.774329    5267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:40.774820    5267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:40.776377    5267 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:54:40 up  2:37,  0 user,  load average: 0.60, 0.16, 0.36
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.319199    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.538652    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: I1002 20:54:30.743699    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:30 functional-012915 kubelet[1773]: E1002 20:54:30.744162    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.854848    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.882849    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:32 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:32 functional-012915 kubelet[1773]:  > podSandboxID="40e327266da6ea4287d08a8331b8fae96b768bae7d96ad99222891f51d752347"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.882963    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:32 functional-012915 kubelet[1773]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:32 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.883008    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.897258    1773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.854495    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885785    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:33 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:33 functional-012915 kubelet[1773]:  > podSandboxID="81cb2ca5ac7acf1d0ec52dc7e36a2ebe21590776e2855b6e5546c94b7dad3e89"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885933    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:33 functional-012915 kubelet[1773]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:33 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885985    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: E1002 20:54:37.540254    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: I1002 20:54:37.745682    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: E1002 20:54:37.746097    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.320317    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	

                                                
                                                
-- /stdout --
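Editor's note: the six minutes of output above are a single poll loop. minikube re-issues GET /api/v1/nodes/functional-012915 roughly every 500ms, every probe fails with connection refused because nothing is listening on 192.168.49.2:8441, and after 6m0.001s the node-ready wait gives up with GUEST_START / context deadline exceeded. Below is a minimal sketch of that poll-until-ready pattern; it is not minikube's actual node_ready.go, and isNodeReady is a hypothetical stand-in for the node GET seen in the log.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// isNodeReady is a hypothetical stand-in for the repeated
// GET /api/v1/nodes/<name> probe in the log above; while the
// apiserver is down it can only fail with connection refused.
func isNodeReady(name string) (bool, error) {
	return false, errors.New("dial tcp 192.168.49.2:8441: connect: connection refused")
}

// waitNodeReady retries every 500ms until the node reports Ready or
// the deadline passes, mirroring the 6m0s wait that expired above.
func waitNodeReady(name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Surfaces as "WaitNodeCondition: context deadline exceeded".
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-ticker.C:
			ready, err := isNodeReady(name)
			if err != nil {
				continue // transient error: log and retry, as node_ready.go:55 does
			}
			if ready {
				return nil
			}
		}
	}
}

func main() {
	// Timeout shortened from the 6m0s in the log so the demo exits quickly.
	if err := waitNodeReady("functional-012915", 2*time.Second); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

The kubelet section above also hints at why the control plane never recovered: the kube-scheduler and kube-controller-manager containers repeatedly fail to create with "cannot open sd-bus: No such file or directory".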
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (308.624062ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (2.13s)
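Editor's note: the status check above passes --format={{.APIServer}}, a Go text/template rendered over minikube's status struct, so the single word "Stopped" in the stdout block is the rendered template value. A self-contained sketch of that rendering step follows; the struct shape is assumed for illustration, not copied from minikube's source.

package main

import (
	"os"
	"text/template"
)

// Status approximates the fields rendered by --format={{.APIServer}};
// the field set here is an assumption for illustration only.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// With the apiserver down this prints "Stopped", matching the
	// -- stdout -- block above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}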

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-012915 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-012915 get pods: exit status 1 (99.975923ms)

                                                
                                                
** stderr ** 
	E1002 20:54:41.716429  109372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:41.716817  109372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:41.718259  109372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:41.718552  109372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 20:54:41.719969  109372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-012915 get pods": exit status 1
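The repeated "connection refused" errors above mean that nothing was accepting TCP connections on 192.168.49.2:8441 when kubectl probed the apiserver. As a minimal, illustrative sketch (not part of the test run), the same failure can be reproduced with a plain TCP dial from Go, whose net package produced the exact error strings shown above:

	// probe.go: dial the apiserver endpoint taken from the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			// With no apiserver listening, this prints:
			// dial tcp 192.168.49.2:8441: connect: connection refused
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}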
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
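The inspect output above shows the container's 8441/tcp (apiserver) endpoint published on 127.0.0.1:32781. As a hedged sketch (assuming a docker CLI on PATH), the published host port can be read back programmatically with the same inspect template style minikube itself uses later in this log for 22/tcp:

	// hostport.go: look up the host port mapped to the container's 8441/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-012915").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// For the container inspected above, this prints 32781.
		fmt.Println(strings.TrimSpace(string(out)))
	}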
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (295.083858ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ nospam-461767 --log_dir /tmp/nospam-461767 pause                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	│ start   │ -p functional-012915 --alsologtostderr -v=8                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │                     │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.1                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.3                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:latest                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add minikube-local-cache-test:functional-012915                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache delete minikube-local-cache-test:functional-012915                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl images                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cache   │ functional-012915 cache reload                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ kubectl │ functional-012915 kubectl -- --context functional-012915 get pods                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:48:24
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:48:24.799042  103439 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:48:24.799301  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799310  103439 out.go:374] Setting ErrFile to fd 2...
	I1002 20:48:24.799319  103439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:48:24.799517  103439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:48:24.799997  103439 out.go:368] Setting JSON to false
	I1002 20:48:24.800864  103439 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9046,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:48:24.800953  103439 start.go:140] virtualization: kvm guest
	I1002 20:48:24.803402  103439 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:48:24.804691  103439 notify.go:220] Checking for updates...
	I1002 20:48:24.804714  103439 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:48:24.806239  103439 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:48:24.807535  103439 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:24.808966  103439 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:48:24.810229  103439 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:48:24.811490  103439 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:48:24.813239  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:24.813364  103439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:48:24.837336  103439 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:48:24.837438  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.897484  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.886469072 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.897616  103439 docker.go:318] overlay module found
	I1002 20:48:24.900384  103439 out.go:179] * Using the docker driver based on existing profile
	I1002 20:48:24.901640  103439 start.go:304] selected driver: docker
	I1002 20:48:24.901656  103439 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.901817  103439 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:48:24.901921  103439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:48:24.957281  103439 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 20:48:24.94713494 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:48:24.957915  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:24.957982  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:24.958030  103439 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:24.959902  103439 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:48:24.961424  103439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:48:24.962912  103439 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:48:24.964111  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:24.964148  103439 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:48:24.964157  103439 cache.go:58] Caching tarball of preloaded images
	I1002 20:48:24.964205  103439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:48:24.964264  103439 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:48:24.964275  103439 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:48:24.964363  103439 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:48:24.984848  103439 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:48:24.984867  103439 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:48:24.984883  103439 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:48:24.984905  103439 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:48:24.984974  103439 start.go:364] duration metric: took 38.359µs to acquireMachinesLock for "functional-012915"
	I1002 20:48:24.984991  103439 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:48:24.984998  103439 fix.go:54] fixHost starting: 
	I1002 20:48:24.985199  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:25.001871  103439 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:48:25.001898  103439 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:48:25.003929  103439 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:48:25.003964  103439 machine.go:93] provisionDockerMachine start ...
	I1002 20:48:25.004037  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.020996  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.021230  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.021243  103439 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:48:25.163676  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.163710  103439 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:48:25.163781  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.181773  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.181995  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.182012  103439 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:48:25.333959  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:48:25.334023  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.352331  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.352586  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.352605  103439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:48:25.495627  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:48:25.495660  103439 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:48:25.495680  103439 ubuntu.go:190] setting up certificates
	I1002 20:48:25.495691  103439 provision.go:84] configureAuth start
	I1002 20:48:25.495761  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:25.513229  103439 provision.go:143] copyHostCerts
	I1002 20:48:25.513269  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513297  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:48:25.513309  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:48:25.513378  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:48:25.513471  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513489  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:48:25.513496  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:48:25.513524  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:48:25.513585  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513606  103439 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:48:25.513612  103439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:48:25.513642  103439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:48:25.513706  103439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:48:25.699700  103439 provision.go:177] copyRemoteCerts
	I1002 20:48:25.699774  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:48:25.699818  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.717132  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:25.819529  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 20:48:25.819590  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:48:25.836961  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 20:48:25.837026  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:48:25.853991  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 20:48:25.854053  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:48:25.872348  103439 provision.go:87] duration metric: took 376.642239ms to configureAuth
	I1002 20:48:25.872378  103439 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:48:25.872536  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:25.872653  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:25.891454  103439 main.go:141] libmachine: Using SSH client type: native
	I1002 20:48:25.891685  103439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:48:25.891706  103439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:48:26.156804  103439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:48:26.156829  103439 machine.go:96] duration metric: took 1.152858016s to provisionDockerMachine
	I1002 20:48:26.156858  103439 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:48:26.156868  103439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:48:26.156920  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:48:26.156969  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.176188  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.278892  103439 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:48:26.282350  103439 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1002 20:48:26.282380  103439 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1002 20:48:26.282385  103439 command_runner.go:130] > VERSION_ID="12"
	I1002 20:48:26.282389  103439 command_runner.go:130] > VERSION="12 (bookworm)"
	I1002 20:48:26.282393  103439 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1002 20:48:26.282397  103439 command_runner.go:130] > ID=debian
	I1002 20:48:26.282401  103439 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1002 20:48:26.282406  103439 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1002 20:48:26.282410  103439 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1002 20:48:26.282454  103439 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:48:26.282471  103439 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:48:26.282480  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:48:26.282532  103439 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:48:26.282613  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:48:26.282622  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 20:48:26.282689  103439 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:48:26.282696  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> /etc/test/nested/copy/84100/hosts
	I1002 20:48:26.282728  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:48:26.291027  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:26.308674  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:48:26.325806  103439 start.go:296] duration metric: took 168.930408ms for postStartSetup
	I1002 20:48:26.325916  103439 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:48:26.325957  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.343664  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.443702  103439 command_runner.go:130] > 54%
	I1002 20:48:26.443812  103439 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:48:26.449039  103439 command_runner.go:130] > 135G
	I1002 20:48:26.449077  103439 fix.go:56] duration metric: took 1.464076482s for fixHost
	I1002 20:48:26.449092  103439 start.go:83] releasing machines lock for "functional-012915", held for 1.464107586s
	I1002 20:48:26.449173  103439 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:48:26.467196  103439 ssh_runner.go:195] Run: cat /version.json
	I1002 20:48:26.467258  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.467342  103439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:48:26.467420  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:26.485438  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.485701  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:26.633417  103439 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1002 20:48:26.635353  103439 command_runner.go:130] > {"iso_version": "v1.37.0-1758198818-20370", "kicbase_version": "v0.0.48-1759382731-21643", "minikube_version": "v1.37.0", "commit": "b0c70dd4d342e6443a02916e52d246d8cdb181c4"}
	I1002 20:48:26.635549  103439 ssh_runner.go:195] Run: systemctl --version
	I1002 20:48:26.642439  103439 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1002 20:48:26.642484  103439 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1002 20:48:26.642544  103439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:48:26.678549  103439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1002 20:48:26.683206  103439 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1002 20:48:26.683277  103439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:48:26.683333  103439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:48:26.691349  103439 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:48:26.691374  103439 start.go:495] detecting cgroup driver to use...
	I1002 20:48:26.691404  103439 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:48:26.691448  103439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:48:26.705612  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:48:26.718317  103439 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:48:26.718372  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:48:26.732790  103439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:48:26.745127  103439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:48:26.830208  103439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:48:26.916089  103439 docker.go:234] disabling docker service ...
	I1002 20:48:26.916158  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:48:26.931041  103439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:48:26.944314  103439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:48:27.029050  103439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:48:27.113127  103439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:48:27.125650  103439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:48:27.138813  103439 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1002 20:48:27.139624  103439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:48:27.139683  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.148622  103439 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:48:27.148678  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.157772  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.166537  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.175276  103439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:48:27.183311  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.192091  103439 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.200250  103439 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.208827  103439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:48:27.216057  103439 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1002 20:48:27.216134  103439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:48:27.223341  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.309631  103439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:48:27.427286  103439 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:48:27.427366  103439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:48:27.431839  103439 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1002 20:48:27.431866  103439 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1002 20:48:27.431885  103439 command_runner.go:130] > Device: 0,59	Inode: 3822        Links: 1
	I1002 20:48:27.431892  103439 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:27.431897  103439 command_runner.go:130] > Access: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431903  103439 command_runner.go:130] > Modify: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431907  103439 command_runner.go:130] > Change: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431912  103439 command_runner.go:130] >  Birth: 2025-10-02 20:48:27.408797776 +0000
	I1002 20:48:27.431962  103439 start.go:563] Will wait 60s for crictl version
	I1002 20:48:27.432014  103439 ssh_runner.go:195] Run: which crictl
	I1002 20:48:27.435939  103439 command_runner.go:130] > /usr/local/bin/crictl
	I1002 20:48:27.436036  103439 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:48:27.458416  103439 command_runner.go:130] > Version:  0.1.0
	I1002 20:48:27.458438  103439 command_runner.go:130] > RuntimeName:  cri-o
	I1002 20:48:27.458443  103439 command_runner.go:130] > RuntimeVersion:  1.34.1
	I1002 20:48:27.458448  103439 command_runner.go:130] > RuntimeApiVersion:  v1
	I1002 20:48:27.460155  103439 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
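After waiting for the socket, minikube asks the runtime for its version over CRI. The same handshake can be reproduced by hand, assuming the default CRI-O socket path shown above:

    # Sketch: query the runtime over CRI at the socket minikube waited for
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Prints Version (the CRI client API version string, "0.1.0" here),
    # RuntimeName, RuntimeVersion and RuntimeApiVersion, matching the log.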
	I1002 20:48:27.460222  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.486159  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.486183  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.486190  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.486198  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.486205  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.486212  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.486219  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.486226  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.486237  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.486246  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.486251  103439 command_runner.go:130] >      static
	I1002 20:48:27.486259  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.486263  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.486266  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.486272  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.486276  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.486300  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.486312  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.486330  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.486339  103439 command_runner.go:130] >    AppArmorEnabled:  false
	I1002 20:48:27.487532  103439 ssh_runner.go:195] Run: crio --version
	I1002 20:48:27.514593  103439 command_runner.go:130] > crio version 1.34.1
	I1002 20:48:27.514624  103439 command_runner.go:130] >    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
	I1002 20:48:27.514630  103439 command_runner.go:130] >    GitCommitDate:  2025-10-01T13:04:13Z
	I1002 20:48:27.514634  103439 command_runner.go:130] >    GitTreeState:   dirty
	I1002 20:48:27.514639  103439 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1002 20:48:27.514643  103439 command_runner.go:130] >    GoVersion:      go1.24.6
	I1002 20:48:27.514647  103439 command_runner.go:130] >    Compiler:       gc
	I1002 20:48:27.514654  103439 command_runner.go:130] >    Platform:       linux/amd64
	I1002 20:48:27.514658  103439 command_runner.go:130] >    Linkmode:       static
	I1002 20:48:27.514662  103439 command_runner.go:130] >    BuildTags:
	I1002 20:48:27.514665  103439 command_runner.go:130] >      static
	I1002 20:48:27.514668  103439 command_runner.go:130] >      netgo
	I1002 20:48:27.514677  103439 command_runner.go:130] >      osusergo
	I1002 20:48:27.514685  103439 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1002 20:48:27.514688  103439 command_runner.go:130] >      seccomp
	I1002 20:48:27.514691  103439 command_runner.go:130] >      apparmor
	I1002 20:48:27.514695  103439 command_runner.go:130] >      selinux
	I1002 20:48:27.514699  103439 command_runner.go:130] >    LDFlags:          unknown
	I1002 20:48:27.514706  103439 command_runner.go:130] >    SeccompEnabled:   true
	I1002 20:48:27.514709  103439 command_runner.go:130] >    AppArmorEnabled:  false
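The build banner appears twice because minikube invokes `crio --version` twice. In a CI report the provenance fields are the ones worth a glance: GitTreeState "dirty" means the binary was built from a tree with uncommitted changes on top of the listed commit.

    # Sketch: pull just the provenance fields out of the banner
    crio --version | grep -E 'GitCommit|GitTreeState'
    #    GitCommit:      8e14bff4153ba033f12ed3ffa3cadaca5425b313
    #    GitCommitDate:  2025-10-01T13:04:13Z
    #    GitTreeState:   dirty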
	I1002 20:48:27.516768  103439 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:48:27.518063  103439 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:48:27.535001  103439 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:48:27.539645  103439 command_runner.go:130] > 192.168.49.1	host.minikube.internal
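The grep confirms host.minikube.internal already resolves to the network gateway, so no write is needed here. When the entry is missing, the usual idempotent pattern is a sketch like the following; the exact command minikube would run is not shown in this log:

    # Sketch: append the gateway mapping only if absent (hypothetical, not from this log)
    grep -q 'host.minikube.internal' /etc/hosts || \
      echo '192.168.49.1	host.minikube.internal' | sudo tee -a /etc/hosts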
	I1002 20:48:27.539759  103439 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:48:27.539875  103439 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:48:27.539928  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.571471  103439 command_runner.go:130] > {
	I1002 20:48:27.571489  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.571493  103439 command_runner.go:130] >     {
	I1002 20:48:27.571502  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.571507  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571513  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.571516  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571520  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571528  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.571535  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.571539  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571543  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.571547  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571554  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571560  103439 command_runner.go:130] >     },
	I1002 20:48:27.571568  103439 command_runner.go:130] >     {
	I1002 20:48:27.571574  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.571577  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571583  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.571588  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571592  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571600  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.571610  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.571616  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571620  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.571626  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571633  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571644  103439 command_runner.go:130] >     },
	I1002 20:48:27.571650  103439 command_runner.go:130] >     {
	I1002 20:48:27.571656  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.571662  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571667  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.571672  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571676  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571685  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.571694  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.571700  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571704  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.571710  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.571714  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571719  103439 command_runner.go:130] >     },
	I1002 20:48:27.571721  103439 command_runner.go:130] >     {
	I1002 20:48:27.571727  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.571733  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571752  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.571758  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571767  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571778  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.571787  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.571792  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571796  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.571802  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571805  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571810  103439 command_runner.go:130] >       },
	I1002 20:48:27.571824  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571831  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571834  103439 command_runner.go:130] >     },
	I1002 20:48:27.571838  103439 command_runner.go:130] >     {
	I1002 20:48:27.571844  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.571850  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571859  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.571866  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571870  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571879  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.571888  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.571894  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571898  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.571903  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571907  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.571913  103439 command_runner.go:130] >       },
	I1002 20:48:27.571916  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.571922  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.571925  103439 command_runner.go:130] >     },
	I1002 20:48:27.571931  103439 command_runner.go:130] >     {
	I1002 20:48:27.571937  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.571943  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.571948  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.571953  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571957  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.571967  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.571976  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.571981  103439 command_runner.go:130] >       ],
	I1002 20:48:27.571985  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.571991  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.571994  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572000  103439 command_runner.go:130] >       },
	I1002 20:48:27.572003  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572009  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572012  103439 command_runner.go:130] >     },
	I1002 20:48:27.572015  103439 command_runner.go:130] >     {
	I1002 20:48:27.572023  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.572027  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572038  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.572048  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572054  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572061  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.572070  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.572076  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572080  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.572085  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572089  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572095  103439 command_runner.go:130] >     },
	I1002 20:48:27.572098  103439 command_runner.go:130] >     {
	I1002 20:48:27.572106  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.572109  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572114  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.572119  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572123  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572132  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.572157  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.572163  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572167  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.572172  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572175  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.572180  103439 command_runner.go:130] >       },
	I1002 20:48:27.572184  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572189  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.572192  103439 command_runner.go:130] >     },
	I1002 20:48:27.572197  103439 command_runner.go:130] >     {
	I1002 20:48:27.572203  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.572206  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.572213  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.572217  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572222  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.572229  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.572237  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.572248  103439 command_runner.go:130] >       ],
	I1002 20:48:27.572254  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.572258  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.572263  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.572267  103439 command_runner.go:130] >       },
	I1002 20:48:27.572273  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.572282  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.572288  103439 command_runner.go:130] >     }
	I1002 20:48:27.572291  103439 command_runner.go:130] >   ]
	I1002 20:48:27.572295  103439 command_runner.go:130] > }
	I1002 20:48:27.573606  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.573628  103439 crio.go:433] Images already preloaded, skipping extraction
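The JSON above is the raw output of `sudo crictl images --output json`, which minikube compares against its preload manifest. The same store can be eyeballed with jq, assuming jq is available on the node (which this log does not show):

    # Sketch: list tag, size and pin status from the CRI image store
    sudo crictl images --output json \
      | jq -r '.images[] | "\(.repoTags[0])\t\(.size)\tpinned=\(.pinned)"'
    # e.g. registry.k8s.io/pause:3.10.1	742092	pinned=true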
	I1002 20:48:27.573687  103439 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:48:27.599395  103439 command_runner.go:130] > {
	I1002 20:48:27.599418  103439 command_runner.go:130] >   "images":  [
	I1002 20:48:27.599424  103439 command_runner.go:130] >     {
	I1002 20:48:27.599434  103439 command_runner.go:130] >       "id":  "409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c",
	I1002 20:48:27.599439  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599447  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1002 20:48:27.599452  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599460  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599473  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1002 20:48:27.599500  103439 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"
	I1002 20:48:27.599510  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599518  103439 command_runner.go:130] >       "size":  "109379124",
	I1002 20:48:27.599526  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599540  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599549  103439 command_runner.go:130] >     },
	I1002 20:48:27.599555  103439 command_runner.go:130] >     {
	I1002 20:48:27.599575  103439 command_runner.go:130] >       "id":  "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1002 20:48:27.599582  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599590  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1002 20:48:27.599596  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599604  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599624  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1002 20:48:27.599640  103439 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1002 20:48:27.599648  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599656  103439 command_runner.go:130] >       "size":  "31470524",
	I1002 20:48:27.599664  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599676  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599684  103439 command_runner.go:130] >     },
	I1002 20:48:27.599690  103439 command_runner.go:130] >     {
	I1002 20:48:27.599703  103439 command_runner.go:130] >       "id":  "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969",
	I1002 20:48:27.599713  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599722  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.12.1"
	I1002 20:48:27.599730  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599754  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599770  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998",
	I1002 20:48:27.599783  103439 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"
	I1002 20:48:27.599791  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599798  103439 command_runner.go:130] >       "size":  "76103547",
	I1002 20:48:27.599808  103439 command_runner.go:130] >       "username":  "nonroot",
	I1002 20:48:27.599815  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599823  103439 command_runner.go:130] >     },
	I1002 20:48:27.599829  103439 command_runner.go:130] >     {
	I1002 20:48:27.599840  103439 command_runner.go:130] >       "id":  "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115",
	I1002 20:48:27.599849  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.599858  103439 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.4-0"
	I1002 20:48:27.599865  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599873  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.599887  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f",
	I1002 20:48:27.599901  103439 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"
	I1002 20:48:27.599918  103439 command_runner.go:130] >       ],
	I1002 20:48:27.599927  103439 command_runner.go:130] >       "size":  "195976448",
	I1002 20:48:27.599934  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.599942  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.599948  103439 command_runner.go:130] >       },
	I1002 20:48:27.599974  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.599984  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.599989  103439 command_runner.go:130] >     },
	I1002 20:48:27.599994  103439 command_runner.go:130] >     {
	I1002 20:48:27.600004  103439 command_runner.go:130] >       "id":  "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97",
	I1002 20:48:27.600013  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600021  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.34.1"
	I1002 20:48:27.600029  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600036  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600050  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964",
	I1002 20:48:27.600065  103439 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"
	I1002 20:48:27.600073  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600080  103439 command_runner.go:130] >       "size":  "89046001",
	I1002 20:48:27.600089  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600103  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600112  103439 command_runner.go:130] >       },
	I1002 20:48:27.600119  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600128  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600134  103439 command_runner.go:130] >     },
	I1002 20:48:27.600142  103439 command_runner.go:130] >     {
	I1002 20:48:27.600152  103439 command_runner.go:130] >       "id":  "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f",
	I1002 20:48:27.600161  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600171  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.34.1"
	I1002 20:48:27.600179  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600185  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600199  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89",
	I1002 20:48:27.600213  103439 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"
	I1002 20:48:27.600220  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600233  103439 command_runner.go:130] >       "size":  "76004181",
	I1002 20:48:27.600242  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600250  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600258  103439 command_runner.go:130] >       },
	I1002 20:48:27.600264  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600273  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600278  103439 command_runner.go:130] >     },
	I1002 20:48:27.600284  103439 command_runner.go:130] >     {
	I1002 20:48:27.600297  103439 command_runner.go:130] >       "id":  "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7",
	I1002 20:48:27.600306  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600315  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.34.1"
	I1002 20:48:27.600332  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600339  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600354  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a",
	I1002 20:48:27.600368  103439 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"
	I1002 20:48:27.600376  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600383  103439 command_runner.go:130] >       "size":  "73138073",
	I1002 20:48:27.600393  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600401  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600410  103439 command_runner.go:130] >     },
	I1002 20:48:27.600415  103439 command_runner.go:130] >     {
	I1002 20:48:27.600423  103439 command_runner.go:130] >       "id":  "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813",
	I1002 20:48:27.600428  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600437  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.34.1"
	I1002 20:48:27.600446  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600452  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600464  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31",
	I1002 20:48:27.600497  103439 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"
	I1002 20:48:27.600505  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600513  103439 command_runner.go:130] >       "size":  "53844823",
	I1002 20:48:27.600520  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600527  103439 command_runner.go:130] >         "value":  "0"
	I1002 20:48:27.600536  103439 command_runner.go:130] >       },
	I1002 20:48:27.600554  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600563  103439 command_runner.go:130] >       "pinned":  false
	I1002 20:48:27.600569  103439 command_runner.go:130] >     },
	I1002 20:48:27.600574  103439 command_runner.go:130] >     {
	I1002 20:48:27.600585  103439 command_runner.go:130] >       "id":  "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f",
	I1002 20:48:27.600594  103439 command_runner.go:130] >       "repoTags":  [
	I1002 20:48:27.600603  103439 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.600611  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600618  103439 command_runner.go:130] >       "repoDigests":  [
	I1002 20:48:27.600631  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1002 20:48:27.600643  103439 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"
	I1002 20:48:27.600652  103439 command_runner.go:130] >       ],
	I1002 20:48:27.600659  103439 command_runner.go:130] >       "size":  "742092",
	I1002 20:48:27.600668  103439 command_runner.go:130] >       "uid":  {
	I1002 20:48:27.600676  103439 command_runner.go:130] >         "value":  "65535"
	I1002 20:48:27.600684  103439 command_runner.go:130] >       },
	I1002 20:48:27.600692  103439 command_runner.go:130] >       "username":  "",
	I1002 20:48:27.600701  103439 command_runner.go:130] >       "pinned":  true
	I1002 20:48:27.600708  103439 command_runner.go:130] >     }
	I1002 20:48:27.600716  103439 command_runner.go:130] >   ]
	I1002 20:48:27.600721  103439 command_runner.go:130] > }
	I1002 20:48:27.600844  103439 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:48:27.600859  103439 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:48:27.600868  103439 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:48:27.600982  103439 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
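The [Unit]/[Service]/[Install] text above is the systemd drop-in minikube renders for the kubelet. The empty first ExecStart= is standard systemd semantics: it clears the command line inherited from the packaged unit so the second ExecStart= fully replaces it rather than appending. After writing such a drop-in, systemd has to re-read its units; a sketch, assuming the unit is installed as kubelet.service:

    # Sketch: inspect the merged unit and reload after editing a drop-in
    sudo systemctl cat kubelet       # shows the base unit plus every drop-in
    sudo systemctl daemon-reload     # re-read unit files after the edit
    sudo systemctl restart kubelet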
	I1002 20:48:27.601057  103439 ssh_runner.go:195] Run: crio config
	I1002 20:48:27.642390  103439 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1002 20:48:27.642423  103439 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1002 20:48:27.642435  103439 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1002 20:48:27.642439  103439 command_runner.go:130] > #
	I1002 20:48:27.642450  103439 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1002 20:48:27.642460  103439 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1002 20:48:27.642470  103439 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1002 20:48:27.642501  103439 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1002 20:48:27.642510  103439 command_runner.go:130] > # reload'.
	I1002 20:48:27.642520  103439 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1002 20:48:27.642532  103439 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1002 20:48:27.642543  103439 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1002 20:48:27.642558  103439 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1002 20:48:27.642563  103439 command_runner.go:130] > [crio]
	I1002 20:48:27.642572  103439 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1002 20:48:27.642580  103439 command_runner.go:130] > # containers images, in this directory.
	I1002 20:48:27.642602  103439 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1002 20:48:27.642618  103439 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1002 20:48:27.642627  103439 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1002 20:48:27.642637  103439 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1002 20:48:27.642643  103439 command_runner.go:130] > # imagestore = ""
	I1002 20:48:27.642656  103439 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1002 20:48:27.642670  103439 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1002 20:48:27.642681  103439 command_runner.go:130] > # storage_driver = "overlay"
	I1002 20:48:27.642691  103439 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1002 20:48:27.642708  103439 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1002 20:48:27.642715  103439 command_runner.go:130] > # storage_option = [
	I1002 20:48:27.642723  103439 command_runner.go:130] > # ]
	I1002 20:48:27.642733  103439 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1002 20:48:27.642762  103439 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1002 20:48:27.642770  103439 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1002 20:48:27.642783  103439 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1002 20:48:27.642796  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1002 20:48:27.642804  103439 command_runner.go:130] > # always happen on a node reboot
	I1002 20:48:27.642814  103439 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1002 20:48:27.642844  103439 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1002 20:48:27.642859  103439 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1002 20:48:27.642869  103439 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1002 20:48:27.642883  103439 command_runner.go:130] > # version_file_persist = ""
	I1002 20:48:27.642895  103439 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1002 20:48:27.642919  103439 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1002 20:48:27.642930  103439 command_runner.go:130] > # internal_wipe = true
	I1002 20:48:27.642942  103439 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1002 20:48:27.642957  103439 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1002 20:48:27.642963  103439 command_runner.go:130] > # internal_repair = true
	I1002 20:48:27.642972  103439 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1002 20:48:27.642981  103439 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1002 20:48:27.642990  103439 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1002 20:48:27.642998  103439 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1002 20:48:27.643012  103439 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1002 20:48:27.643018  103439 command_runner.go:130] > [crio.api]
	I1002 20:48:27.643028  103439 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1002 20:48:27.643038  103439 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1002 20:48:27.643047  103439 command_runner.go:130] > # IP address on which the stream server will listen.
	I1002 20:48:27.643058  103439 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1002 20:48:27.643068  103439 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1002 20:48:27.643081  103439 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1002 20:48:27.643088  103439 command_runner.go:130] > # stream_port = "0"
	I1002 20:48:27.643100  103439 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1002 20:48:27.643107  103439 command_runner.go:130] > # stream_enable_tls = false
	I1002 20:48:27.643117  103439 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1002 20:48:27.643126  103439 command_runner.go:130] > # stream_idle_timeout = ""
	I1002 20:48:27.643137  103439 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1002 20:48:27.643149  103439 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643154  103439 command_runner.go:130] > # stream_tls_cert = ""
	I1002 20:48:27.643169  103439 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1002 20:48:27.643178  103439 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1002 20:48:27.643188  103439 command_runner.go:130] > # stream_tls_key = ""
	I1002 20:48:27.643205  103439 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1002 20:48:27.643218  103439 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1002 20:48:27.643228  103439 command_runner.go:130] > # automatically pick up the changes.
	I1002 20:48:27.643241  103439 command_runner.go:130] > # stream_tls_ca = ""
	I1002 20:48:27.643279  103439 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643300  103439 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1002 20:48:27.643322  103439 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1002 20:48:27.643333  103439 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1002 20:48:27.643343  103439 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1002 20:48:27.643352  103439 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1002 20:48:27.643370  103439 command_runner.go:130] > [crio.runtime]
	I1002 20:48:27.643381  103439 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1002 20:48:27.643393  103439 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1002 20:48:27.643403  103439 command_runner.go:130] > # "nofile=1024:2048"
	I1002 20:48:27.643414  103439 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1002 20:48:27.643423  103439 command_runner.go:130] > # default_ulimits = [
	I1002 20:48:27.643428  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643441  103439 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1002 20:48:27.643450  103439 command_runner.go:130] > # no_pivot = false
	I1002 20:48:27.643460  103439 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1002 20:48:27.643473  103439 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1002 20:48:27.643482  103439 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1002 20:48:27.643494  103439 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1002 20:48:27.643511  103439 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1002 20:48:27.643524  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643532  103439 command_runner.go:130] > # conmon = ""
	I1002 20:48:27.643539  103439 command_runner.go:130] > # Cgroup setting for conmon
	I1002 20:48:27.643549  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1002 20:48:27.643556  103439 command_runner.go:130] > conmon_cgroup = "pod"
	I1002 20:48:27.643565  103439 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1002 20:48:27.643572  103439 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1002 20:48:27.643582  103439 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1002 20:48:27.643588  103439 command_runner.go:130] > # conmon_env = [
	I1002 20:48:27.643592  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643600  103439 command_runner.go:130] > # Additional environment variables to set for all the
	I1002 20:48:27.643612  103439 command_runner.go:130] > # containers. These are overridden if set in the
	I1002 20:48:27.643622  103439 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1002 20:48:27.643631  103439 command_runner.go:130] > # default_env = [
	I1002 20:48:27.643647  103439 command_runner.go:130] > # ]
	I1002 20:48:27.643661  103439 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1002 20:48:27.643672  103439 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1002 20:48:27.643679  103439 command_runner.go:130] > # selinux = false
	I1002 20:48:27.643689  103439 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1002 20:48:27.643701  103439 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1002 20:48:27.643710  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643717  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.643729  103439 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1002 20:48:27.643755  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643766  103439 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1002 20:48:27.643777  103439 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1002 20:48:27.643790  103439 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1002 20:48:27.643804  103439 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1002 20:48:27.643815  103439 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1002 20:48:27.643826  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643834  103439 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1002 20:48:27.643847  103439 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1002 20:48:27.643856  103439 command_runner.go:130] > # the cgroup blockio controller.
	I1002 20:48:27.643863  103439 command_runner.go:130] > # blockio_config_file = ""
	I1002 20:48:27.643875  103439 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1002 20:48:27.643886  103439 command_runner.go:130] > # blockio parameters.
	I1002 20:48:27.643892  103439 command_runner.go:130] > # blockio_reload = false
	I1002 20:48:27.643901  103439 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1002 20:48:27.643907  103439 command_runner.go:130] > # irqbalance daemon.
	I1002 20:48:27.643914  103439 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1002 20:48:27.643922  103439 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1002 20:48:27.643930  103439 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1002 20:48:27.643939  103439 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1002 20:48:27.643946  103439 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1002 20:48:27.643955  103439 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1002 20:48:27.643967  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.643976  103439 command_runner.go:130] > # rdt_config_file = ""
	I1002 20:48:27.643991  103439 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1002 20:48:27.643998  103439 command_runner.go:130] > # cgroup_manager = "systemd"
	I1002 20:48:27.644004  103439 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1002 20:48:27.644010  103439 command_runner.go:130] > # separate_pull_cgroup = ""
	I1002 20:48:27.644016  103439 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1002 20:48:27.644022  103439 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1002 20:48:27.644026  103439 command_runner.go:130] > # will be added.
	I1002 20:48:27.644030  103439 command_runner.go:130] > # default_capabilities = [
	I1002 20:48:27.644036  103439 command_runner.go:130] > # 	"CHOWN",
	I1002 20:48:27.644039  103439 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1002 20:48:27.644042  103439 command_runner.go:130] > # 	"FSETID",
	I1002 20:48:27.644046  103439 command_runner.go:130] > # 	"FOWNER",
	I1002 20:48:27.644049  103439 command_runner.go:130] > # 	"SETGID",
	I1002 20:48:27.644077  103439 command_runner.go:130] > # 	"SETUID",
	I1002 20:48:27.644089  103439 command_runner.go:130] > # 	"SETPCAP",
	I1002 20:48:27.644096  103439 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1002 20:48:27.644099  103439 command_runner.go:130] > # 	"KILL",
	I1002 20:48:27.644102  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644111  103439 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1002 20:48:27.644117  103439 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1002 20:48:27.644124  103439 command_runner.go:130] > # add_inheritable_capabilities = false
	I1002 20:48:27.644129  103439 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1002 20:48:27.644137  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644140  103439 command_runner.go:130] > default_sysctls = [
	I1002 20:48:27.644146  103439 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1002 20:48:27.644149  103439 command_runner.go:130] > ]
	I1002 20:48:27.644153  103439 command_runner.go:130] > # List of devices on the host that a
	I1002 20:48:27.644159  103439 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1002 20:48:27.644165  103439 command_runner.go:130] > # allowed_devices = [
	I1002 20:48:27.644168  103439 command_runner.go:130] > # 	"/dev/fuse",
	I1002 20:48:27.644172  103439 command_runner.go:130] > # 	"/dev/net/tun",
	I1002 20:48:27.644177  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644181  103439 command_runner.go:130] > # List of additional devices. specified as
	I1002 20:48:27.644194  103439 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1002 20:48:27.644201  103439 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1002 20:48:27.644207  103439 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1002 20:48:27.644210  103439 command_runner.go:130] > # additional_devices = [
	I1002 20:48:27.644213  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644218  103439 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1002 20:48:27.644224  103439 command_runner.go:130] > # cdi_spec_dirs = [
	I1002 20:48:27.644227  103439 command_runner.go:130] > # 	"/etc/cdi",
	I1002 20:48:27.644231  103439 command_runner.go:130] > # 	"/var/run/cdi",
	I1002 20:48:27.644235  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644241  103439 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1002 20:48:27.644249  103439 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1002 20:48:27.644253  103439 command_runner.go:130] > # Defaults to false.
	I1002 20:48:27.644259  103439 command_runner.go:130] > # device_ownership_from_security_context = false
	I1002 20:48:27.644265  103439 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1002 20:48:27.644272  103439 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1002 20:48:27.644275  103439 command_runner.go:130] > # hooks_dir = [
	I1002 20:48:27.644280  103439 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1002 20:48:27.644283  103439 command_runner.go:130] > # ]
	I1002 20:48:27.644289  103439 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1002 20:48:27.644297  103439 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1002 20:48:27.644302  103439 command_runner.go:130] > # its default mounts from the following two files:
	I1002 20:48:27.644305  103439 command_runner.go:130] > #
	I1002 20:48:27.644310  103439 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1002 20:48:27.644323  103439 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1002 20:48:27.644329  103439 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1002 20:48:27.644334  103439 command_runner.go:130] > #
	I1002 20:48:27.644340  103439 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1002 20:48:27.644346  103439 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1002 20:48:27.644352  103439 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1002 20:48:27.644356  103439 command_runner.go:130] > #      only add mounts it finds in this file.
	I1002 20:48:27.644359  103439 command_runner.go:130] > #
	I1002 20:48:27.644363  103439 command_runner.go:130] > # default_mounts_file = ""
	I1002 20:48:27.644377  103439 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1002 20:48:27.644385  103439 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1002 20:48:27.644389  103439 command_runner.go:130] > # pids_limit = -1
	I1002 20:48:27.644397  103439 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1002 20:48:27.644403  103439 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1002 20:48:27.644409  103439 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1002 20:48:27.644418  103439 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1002 20:48:27.644422  103439 command_runner.go:130] > # log_size_max = -1
	I1002 20:48:27.644430  103439 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1002 20:48:27.644434  103439 command_runner.go:130] > # log_to_journald = false
	I1002 20:48:27.644439  103439 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1002 20:48:27.644444  103439 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1002 20:48:27.644450  103439 command_runner.go:130] > # Path to directory for container attach sockets.
	I1002 20:48:27.644454  103439 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1002 20:48:27.644461  103439 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1002 20:48:27.644465  103439 command_runner.go:130] > # bind_mount_prefix = ""
	I1002 20:48:27.644470  103439 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1002 20:48:27.644473  103439 command_runner.go:130] > # read_only = false
	I1002 20:48:27.644482  103439 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1002 20:48:27.644490  103439 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1002 20:48:27.644494  103439 command_runner.go:130] > # live configuration reload.
	I1002 20:48:27.644500  103439 command_runner.go:130] > # log_level = "info"
	I1002 20:48:27.644505  103439 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1002 20:48:27.644509  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.644512  103439 command_runner.go:130] > # log_filter = ""
	I1002 20:48:27.644518  103439 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644525  103439 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1002 20:48:27.644529  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644536  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644542  103439 command_runner.go:130] > # uid_mappings = ""
	I1002 20:48:27.644547  103439 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1002 20:48:27.644552  103439 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1002 20:48:27.644559  103439 command_runner.go:130] > # separated by comma.
	I1002 20:48:27.644573  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644579  103439 command_runner.go:130] > # gid_mappings = ""
	I1002 20:48:27.644585  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1002 20:48:27.644591  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644598  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644606  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644611  103439 command_runner.go:130] > # minimum_mappable_uid = -1
	I1002 20:48:27.644617  103439 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1002 20:48:27.644625  103439 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1002 20:48:27.644631  103439 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1002 20:48:27.644640  103439 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1002 20:48:27.644644  103439 command_runner.go:130] > # minimum_mappable_gid = -1
	I1002 20:48:27.644652  103439 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1002 20:48:27.644657  103439 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1002 20:48:27.644665  103439 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1002 20:48:27.644668  103439 command_runner.go:130] > # ctr_stop_timeout = 30
	I1002 20:48:27.644673  103439 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1002 20:48:27.644679  103439 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1002 20:48:27.644686  103439 command_runner.go:130] > # a kernel-separated runtime (like kata).
	I1002 20:48:27.644690  103439 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1002 20:48:27.644693  103439 command_runner.go:130] > # drop_infra_ctr = true
	I1002 20:48:27.644699  103439 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1002 20:48:27.644706  103439 command_runner.go:130] > # You can use the Linux CPU list format to specify desired CPUs.
	I1002 20:48:27.644712  103439 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1002 20:48:27.644718  103439 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1002 20:48:27.644726  103439 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1002 20:48:27.644733  103439 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1002 20:48:27.644752  103439 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1002 20:48:27.644764  103439 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1002 20:48:27.644769  103439 command_runner.go:130] > # shared_cpuset = ""
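For illustration only, a hedged sketch of pinning both cpuset options above in a drop-in file; the file path and CPU ranges are hypothetical, while the keys and the Linux CPU list format are the ones documented above:

	# /etc/crio/crio.conf.d/15-cpusets.conf (hypothetical)
	[crio.runtime]
	# Run infra containers only on CPUs 0-1 (e.g. matching kubelet reserved-cpus).
	infra_ctr_cpuset = "0-1"
	# Allow CPUs 2-3 to be shared between guaranteed containers.
	shared_cpuset = "2-3"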
	I1002 20:48:27.644777  103439 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1002 20:48:27.644782  103439 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1002 20:48:27.644785  103439 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1002 20:48:27.644798  103439 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1002 20:48:27.644804  103439 command_runner.go:130] > # pinns_path = ""
	I1002 20:48:27.644810  103439 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1002 20:48:27.644817  103439 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1002 20:48:27.644821  103439 command_runner.go:130] > # enable_criu_support = true
	I1002 20:48:27.644826  103439 command_runner.go:130] > # Enable/disable the generation of the container and
	I1002 20:48:27.644831  103439 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1002 20:48:27.644837  103439 command_runner.go:130] > # enable_pod_events = false
	I1002 20:48:27.644842  103439 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1002 20:48:27.644849  103439 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1002 20:48:27.644853  103439 command_runner.go:130] > # default_runtime = "crun"
	I1002 20:48:27.644858  103439 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1002 20:48:27.644867  103439 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1002 20:48:27.644876  103439 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1002 20:48:27.644882  103439 command_runner.go:130] > # creation as a file is not desired either.
	I1002 20:48:27.644890  103439 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1002 20:48:27.644896  103439 command_runner.go:130] > # the hostname is being managed dynamically.
	I1002 20:48:27.644900  103439 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1002 20:48:27.644905  103439 command_runner.go:130] > # ]
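A minimal sketch of this option filled in, reusing the /etc/hostname example from the comment above (shown as a hypothetical drop-in value, not the shipped default):

	[crio.runtime]
	# Fail container creation instead of creating /etc/hostname as a directory.
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]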
	I1002 20:48:27.644911  103439 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1002 20:48:27.644919  103439 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1002 20:48:27.644925  103439 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1002 20:48:27.644930  103439 command_runner.go:130] > # Each entry in the table should follow the format:
	I1002 20:48:27.644932  103439 command_runner.go:130] > #
	I1002 20:48:27.644937  103439 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1002 20:48:27.644943  103439 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1002 20:48:27.644947  103439 command_runner.go:130] > # runtime_type = "oci"
	I1002 20:48:27.644951  103439 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1002 20:48:27.644955  103439 command_runner.go:130] > # inherit_default_runtime = false
	I1002 20:48:27.644959  103439 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1002 20:48:27.644963  103439 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1002 20:48:27.644968  103439 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1002 20:48:27.644972  103439 command_runner.go:130] > # monitor_env = []
	I1002 20:48:27.644980  103439 command_runner.go:130] > # privileged_without_host_devices = false
	I1002 20:48:27.644987  103439 command_runner.go:130] > # allowed_annotations = []
	I1002 20:48:27.644992  103439 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1002 20:48:27.644998  103439 command_runner.go:130] > # no_sync_log = false
	I1002 20:48:27.645001  103439 command_runner.go:130] > # default_annotations = {}
	I1002 20:48:27.645007  103439 command_runner.go:130] > # stream_websockets = false
	I1002 20:48:27.645011  103439 command_runner.go:130] > # seccomp_profile = ""
	I1002 20:48:27.645086  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645099  103439 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1002 20:48:27.645104  103439 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1002 20:48:27.645110  103439 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1002 20:48:27.645115  103439 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1002 20:48:27.645119  103439 command_runner.go:130] > #   in $PATH.
	I1002 20:48:27.645124  103439 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1002 20:48:27.645131  103439 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1002 20:48:27.645137  103439 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1002 20:48:27.645142  103439 command_runner.go:130] > #   state.
	I1002 20:48:27.645148  103439 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1002 20:48:27.645156  103439 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1002 20:48:27.645161  103439 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1002 20:48:27.645173  103439 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1002 20:48:27.645180  103439 command_runner.go:130] > #   the values from the default runtime on load time.
	I1002 20:48:27.645186  103439 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1002 20:48:27.645191  103439 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1002 20:48:27.645197  103439 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1002 20:48:27.645205  103439 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1002 20:48:27.645216  103439 command_runner.go:130] > #   The currently recognized values are:
	I1002 20:48:27.645224  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1002 20:48:27.645231  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1002 20:48:27.645239  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1002 20:48:27.645245  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1002 20:48:27.645254  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1002 20:48:27.645259  103439 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1002 20:48:27.645276  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1002 20:48:27.645284  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1002 20:48:27.645296  103439 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1002 20:48:27.645301  103439 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1002 20:48:27.645309  103439 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1002 20:48:27.645320  103439 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1002 20:48:27.645327  103439 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1002 20:48:27.645333  103439 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1002 20:48:27.645341  103439 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1002 20:48:27.645348  103439 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1002 20:48:27.645355  103439 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1002 20:48:27.645360  103439 command_runner.go:130] > #   deprecated option "conmon".
	I1002 20:48:27.645368  103439 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1002 20:48:27.645373  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1002 20:48:27.645381  103439 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1002 20:48:27.645385  103439 command_runner.go:130] > #   should be moved to the container's cgroup
	I1002 20:48:27.645392  103439 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1002 20:48:27.645398  103439 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1002 20:48:27.645405  103439 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1002 20:48:27.645410  103439 command_runner.go:130] > #   conmon-rs by using:
	I1002 20:48:27.645417  103439 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1002 20:48:27.645426  103439 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1002 20:48:27.645433  103439 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1002 20:48:27.645441  103439 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1002 20:48:27.645446  103439 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1002 20:48:27.645454  103439 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1002 20:48:27.645461  103439 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1002 20:48:27.645468  103439 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1002 20:48:27.645475  103439 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1002 20:48:27.645484  103439 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1002 20:48:27.645490  103439 command_runner.go:130] > #   when a machine crash happens.
	I1002 20:48:27.645496  103439 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1002 20:48:27.645505  103439 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1002 20:48:27.645517  103439 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1002 20:48:27.645523  103439 command_runner.go:130] > #   seccomp profile for the runtime.
	I1002 20:48:27.645529  103439 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1002 20:48:27.645542  103439 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1002 20:48:27.645548  103439 command_runner.go:130] > #
	I1002 20:48:27.645552  103439 command_runner.go:130] > # Using the seccomp notifier feature:
	I1002 20:48:27.645555  103439 command_runner.go:130] > #
	I1002 20:48:27.645560  103439 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1002 20:48:27.645569  103439 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1002 20:48:27.645573  103439 command_runner.go:130] > #
	I1002 20:48:27.645578  103439 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1002 20:48:27.645586  103439 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1002 20:48:27.645589  103439 command_runner.go:130] > #
	I1002 20:48:27.645595  103439 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1002 20:48:27.645598  103439 command_runner.go:130] > # feature.
	I1002 20:48:27.645601  103439 command_runner.go:130] > #
	I1002 20:48:27.645606  103439 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1002 20:48:27.645615  103439 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1002 20:48:27.645622  103439 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1002 20:48:27.645627  103439 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1002 20:48:27.645635  103439 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I1002 20:48:27.645637  103439 command_runner.go:130] > #
	I1002 20:48:27.645643  103439 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1002 20:48:27.645651  103439 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1002 20:48:27.645653  103439 command_runner.go:130] > #
	I1002 20:48:27.645662  103439 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1002 20:48:27.645672  103439 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1002 20:48:27.645676  103439 command_runner.go:130] > #
	I1002 20:48:27.645682  103439 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1002 20:48:27.645690  103439 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1002 20:48:27.645693  103439 command_runner.go:130] > # limitation.
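Assembling the fields documented above into one entry, a hedged sketch of a custom runtime handler follows; the handler name and paths are hypothetical, and only fields described in the comments are used:

	[crio.runtime.runtimes.myhandler]
	# Absolute path to the (hypothetical) runtime binary on the host.
	runtime_path = "/usr/local/bin/myhandler"
	# "oci" is the type assumed when runtime_type is omitted.
	runtime_type = "oci"
	# Root directory for storage of container state.
	runtime_root = "/run/myhandler"
	# Container monitor binary; replaces the deprecated "conmon" option.
	monitor_path = "/usr/libexec/crio/conmon"
	# Experimental annotations this handler is allowed to process.
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]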
	I1002 20:48:27.645697  103439 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1002 20:48:27.645701  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1002 20:48:27.645709  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645715  103439 command_runner.go:130] > runtime_root = "/run/crun"
	I1002 20:48:27.645725  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645731  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645746  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645754  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645762  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645768  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645777  103439 command_runner.go:130] > allowed_annotations = [
	I1002 20:48:27.645783  103439 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1002 20:48:27.645788  103439 command_runner.go:130] > ]
	I1002 20:48:27.645792  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645796  103439 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1002 20:48:27.645803  103439 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1002 20:48:27.645807  103439 command_runner.go:130] > runtime_type = ""
	I1002 20:48:27.645811  103439 command_runner.go:130] > runtime_root = "/run/runc"
	I1002 20:48:27.645815  103439 command_runner.go:130] > inherit_default_runtime = false
	I1002 20:48:27.645818  103439 command_runner.go:130] > runtime_config_path = ""
	I1002 20:48:27.645822  103439 command_runner.go:130] > container_min_memory = ""
	I1002 20:48:27.645826  103439 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1002 20:48:27.645830  103439 command_runner.go:130] > monitor_cgroup = "pod"
	I1002 20:48:27.645834  103439 command_runner.go:130] > monitor_exec_cgroup = ""
	I1002 20:48:27.645838  103439 command_runner.go:130] > privileged_without_host_devices = false
	I1002 20:48:27.645844  103439 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1002 20:48:27.645852  103439 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1002 20:48:27.645857  103439 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1002 20:48:27.645866  103439 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1002 20:48:27.645875  103439 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1002 20:48:27.645886  103439 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1002 20:48:27.645894  103439 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1002 20:48:27.645899  103439 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1002 20:48:27.645907  103439 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1002 20:48:27.645917  103439 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1002 20:48:27.645930  103439 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1002 20:48:27.645940  103439 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1002 20:48:27.645943  103439 command_runner.go:130] > # Example:
	I1002 20:48:27.645949  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1002 20:48:27.645953  103439 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1002 20:48:27.645960  103439 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1002 20:48:27.645966  103439 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1002 20:48:27.645972  103439 command_runner.go:130] > # cpuset = "0-1"
	I1002 20:48:27.645975  103439 command_runner.go:130] > # cpushares = "5"
	I1002 20:48:27.645979  103439 command_runner.go:130] > # cpuquota = "1000"
	I1002 20:48:27.645982  103439 command_runner.go:130] > # cpuperiod = "100000"
	I1002 20:48:27.645986  103439 command_runner.go:130] > # cpulimit = "35"
	I1002 20:48:27.645989  103439 command_runner.go:130] > # Where:
	I1002 20:48:27.645993  103439 command_runner.go:130] > # The workload name is workload-type.
	I1002 20:48:27.646000  103439 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1002 20:48:27.646006  103439 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1002 20:48:27.646011  103439 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1002 20:48:27.646021  103439 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1002 20:48:27.646026  103439 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1002 20:48:27.646034  103439 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1002 20:48:27.646044  103439 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1002 20:48:27.646052  103439 command_runner.go:130] > # Default value is set to true
	I1002 20:48:27.646058  103439 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1002 20:48:27.646068  103439 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1002 20:48:27.646074  103439 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1002 20:48:27.646083  103439 command_runner.go:130] > # Default value is set to 'false'
	I1002 20:48:27.646092  103439 command_runner.go:130] > # disable_hostport_mapping = false
	I1002 20:48:27.646104  103439 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1002 20:48:27.646118  103439 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1002 20:48:27.646127  103439 command_runner.go:130] > # timezone = ""
	I1002 20:48:27.646136  103439 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1002 20:48:27.646144  103439 command_runner.go:130] > #
	I1002 20:48:27.646158  103439 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1002 20:48:27.646179  103439 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1002 20:48:27.646188  103439 command_runner.go:130] > [crio.image]
	I1002 20:48:27.646201  103439 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1002 20:48:27.646209  103439 command_runner.go:130] > # default_transport = "docker://"
	I1002 20:48:27.646217  103439 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1002 20:48:27.646225  103439 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646229  103439 command_runner.go:130] > # global_auth_file = ""
	I1002 20:48:27.646236  103439 command_runner.go:130] > # The image used to instantiate infra containers.
	I1002 20:48:27.646241  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646248  103439 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1002 20:48:27.646254  103439 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1002 20:48:27.646260  103439 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1002 20:48:27.646265  103439 command_runner.go:130] > # This option supports live configuration reload.
	I1002 20:48:27.646271  103439 command_runner.go:130] > # pause_image_auth_file = ""
	I1002 20:48:27.646276  103439 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1002 20:48:27.646281  103439 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1002 20:48:27.646289  103439 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1002 20:48:27.646295  103439 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1002 20:48:27.646301  103439 command_runner.go:130] > # pause_command = "/pause"
	I1002 20:48:27.646306  103439 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1002 20:48:27.646316  103439 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1002 20:48:27.646323  103439 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1002 20:48:27.646329  103439 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1002 20:48:27.646336  103439 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1002 20:48:27.646342  103439 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1002 20:48:27.646345  103439 command_runner.go:130] > # pinned_images = [
	I1002 20:48:27.646348  103439 command_runner.go:130] > # ]
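As a sketch of the three pattern kinds described above (the image names other than the pause image are hypothetical):

	[crio.image]
	pinned_images = [
		"registry.k8s.io/pause:3.10.1", # exact: must match the entire name
		"quay.io/example/*",            # glob: wildcard only at the end
		"*critical*",                   # keyword: wildcards on both ends
	]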
	I1002 20:48:27.646354  103439 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1002 20:48:27.646362  103439 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1002 20:48:27.646368  103439 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1002 20:48:27.646376  103439 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1002 20:48:27.646381  103439 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1002 20:48:27.646386  103439 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1002 20:48:27.646399  103439 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1002 20:48:27.646411  103439 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1002 20:48:27.646423  103439 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1002 20:48:27.646436  103439 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1002 20:48:27.646447  103439 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1002 20:48:27.646458  103439 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1002 20:48:27.646470  103439 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1002 20:48:27.646480  103439 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1002 20:48:27.646486  103439 command_runner.go:130] > # changing them here.
	I1002 20:48:27.646491  103439 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1002 20:48:27.646497  103439 command_runner.go:130] > # insecure_registries = [
	I1002 20:48:27.646500  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646507  103439 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1002 20:48:27.646516  103439 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1002 20:48:27.646522  103439 command_runner.go:130] > # image_volumes = "mkdir"
	I1002 20:48:27.646527  103439 command_runner.go:130] > # Temporary directory to use for storing big files
	I1002 20:48:27.646531  103439 command_runner.go:130] > # big_files_temporary_dir = ""
	I1002 20:48:27.646538  103439 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1002 20:48:27.646544  103439 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1002 20:48:27.646551  103439 command_runner.go:130] > # auto_reload_registries = false
	I1002 20:48:27.646557  103439 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1002 20:48:27.646571  103439 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1002 20:48:27.646579  103439 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1002 20:48:27.646583  103439 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1002 20:48:27.646590  103439 command_runner.go:130] > # The mode of short name resolution.
	I1002 20:48:27.646596  103439 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1002 20:48:27.646605  103439 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1002 20:48:27.646611  103439 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1002 20:48:27.646615  103439 command_runner.go:130] > # short_name_mode = "enforcing"
	I1002 20:48:27.646620  103439 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1002 20:48:27.646628  103439 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1002 20:48:27.646632  103439 command_runner.go:130] > # oci_artifact_mount_support = true
	I1002 20:48:27.646638  103439 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1002 20:48:27.646649  103439 command_runner.go:130] > # CNI plugins.
	I1002 20:48:27.646655  103439 command_runner.go:130] > [crio.network]
	I1002 20:48:27.646660  103439 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1002 20:48:27.646667  103439 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1002 20:48:27.646671  103439 command_runner.go:130] > # cni_default_network = ""
	I1002 20:48:27.646678  103439 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1002 20:48:27.646682  103439 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1002 20:48:27.646690  103439 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1002 20:48:27.646693  103439 command_runner.go:130] > # plugin_dirs = [
	I1002 20:48:27.646696  103439 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1002 20:48:27.646699  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646703  103439 command_runner.go:130] > # List of included pod metrics.
	I1002 20:48:27.646709  103439 command_runner.go:130] > # included_pod_metrics = [
	I1002 20:48:27.646711  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646716  103439 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1002 20:48:27.646722  103439 command_runner.go:130] > [crio.metrics]
	I1002 20:48:27.646726  103439 command_runner.go:130] > # Globally enable or disable metrics support.
	I1002 20:48:27.646732  103439 command_runner.go:130] > # enable_metrics = false
	I1002 20:48:27.646752  103439 command_runner.go:130] > # Specify enabled metrics collectors.
	I1002 20:48:27.646761  103439 command_runner.go:130] > # By default, all metrics are enabled.
	I1002 20:48:27.646767  103439 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1002 20:48:27.646775  103439 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1002 20:48:27.646783  103439 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1002 20:48:27.646787  103439 command_runner.go:130] > # metrics_collectors = [
	I1002 20:48:27.646793  103439 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1002 20:48:27.646797  103439 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1002 20:48:27.646800  103439 command_runner.go:130] > # 	"containers_oom_total",
	I1002 20:48:27.646804  103439 command_runner.go:130] > # 	"processes_defunct",
	I1002 20:48:27.646807  103439 command_runner.go:130] > # 	"operations_total",
	I1002 20:48:27.646811  103439 command_runner.go:130] > # 	"operations_latency_seconds",
	I1002 20:48:27.646815  103439 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1002 20:48:27.646818  103439 command_runner.go:130] > # 	"operations_errors_total",
	I1002 20:48:27.646822  103439 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1002 20:48:27.646831  103439 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1002 20:48:27.646835  103439 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1002 20:48:27.646839  103439 command_runner.go:130] > # 	"image_pulls_success_total",
	I1002 20:48:27.646842  103439 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1002 20:48:27.646846  103439 command_runner.go:130] > # 	"containers_oom_count_total",
	I1002 20:48:27.646850  103439 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1002 20:48:27.646853  103439 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1002 20:48:27.646857  103439 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1002 20:48:27.646860  103439 command_runner.go:130] > # ]
	I1002 20:48:27.646868  103439 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1002 20:48:27.646874  103439 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1002 20:48:27.646880  103439 command_runner.go:130] > # The port on which the metrics server will listen.
	I1002 20:48:27.646886  103439 command_runner.go:130] > # metrics_port = 9090
	I1002 20:48:27.646891  103439 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1002 20:48:27.646901  103439 command_runner.go:130] > # metrics_socket = ""
	I1002 20:48:27.646909  103439 command_runner.go:130] > # The certificate for the secure metrics server.
	I1002 20:48:27.646914  103439 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1002 20:48:27.646922  103439 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1002 20:48:27.646928  103439 command_runner.go:130] > # certificate on any modification event.
	I1002 20:48:27.646932  103439 command_runner.go:130] > # metrics_cert = ""
	I1002 20:48:27.646939  103439 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1002 20:48:27.646943  103439 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1002 20:48:27.646949  103439 command_runner.go:130] > # metrics_key = ""
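A hedged sketch turning the metrics server on with the knobs documented above; the host and port shown are the documented defaults, repeated here only as example values:

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090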
	I1002 20:48:27.646954  103439 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1002 20:48:27.646960  103439 command_runner.go:130] > [crio.tracing]
	I1002 20:48:27.646966  103439 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1002 20:48:27.646971  103439 command_runner.go:130] > # enable_tracing = false
	I1002 20:48:27.646977  103439 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1002 20:48:27.646983  103439 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1002 20:48:27.646993  103439 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1002 20:48:27.646999  103439 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
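Similarly, a sketch enabling tracing with the two options above; the endpoint is the documented default, and the sampling rate follows the always-sample hint in the comment:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	# 1000000 samples per million spans = always sample, per the comment above.
	tracing_sampling_rate_per_million = 1000000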
	I1002 20:48:27.647003  103439 command_runner.go:130] > # CRI-O NRI configuration.
	I1002 20:48:27.647009  103439 command_runner.go:130] > [crio.nri]
	I1002 20:48:27.647017  103439 command_runner.go:130] > # Globally enable or disable NRI.
	I1002 20:48:27.647023  103439 command_runner.go:130] > # enable_nri = true
	I1002 20:48:27.647032  103439 command_runner.go:130] > # NRI socket to listen on.
	I1002 20:48:27.647038  103439 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1002 20:48:27.647042  103439 command_runner.go:130] > # NRI plugin directory to use.
	I1002 20:48:27.647049  103439 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1002 20:48:27.647053  103439 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1002 20:48:27.647060  103439 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1002 20:48:27.647065  103439 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1002 20:48:27.647584  103439 command_runner.go:130] > # nri_disable_connections = false
	I1002 20:48:27.647654  103439 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1002 20:48:27.647663  103439 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1002 20:48:27.647672  103439 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1002 20:48:27.647686  103439 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1002 20:48:27.647693  103439 command_runner.go:130] > # NRI default validator configuration.
	I1002 20:48:27.647707  103439 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1002 20:48:27.647731  103439 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1002 20:48:27.647757  103439 command_runner.go:130] > # can be restricted/rejected:
	I1002 20:48:27.647770  103439 command_runner.go:130] > # - OCI hook injection
	I1002 20:48:27.647779  103439 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1002 20:48:27.647792  103439 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1002 20:48:27.647798  103439 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1002 20:48:27.647805  103439 command_runner.go:130] > # - adjustment of linux namespaces
	I1002 20:48:27.647819  103439 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1002 20:48:27.647828  103439 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1002 20:48:27.647837  103439 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1002 20:48:27.647841  103439 command_runner.go:130] > #
	I1002 20:48:27.647853  103439 command_runner.go:130] > # [crio.nri.default_validator]
	I1002 20:48:27.647859  103439 command_runner.go:130] > # nri_enable_default_validator = false
	I1002 20:48:27.647866  103439 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1002 20:48:27.647883  103439 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1002 20:48:27.647891  103439 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1002 20:48:27.647898  103439 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1002 20:48:27.647906  103439 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1002 20:48:27.647916  103439 command_runner.go:130] > # nri_validator_required_plugins = [
	I1002 20:48:27.647921  103439 command_runner.go:130] > # ]
	I1002 20:48:27.647929  103439 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1002 20:48:27.647939  103439 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1002 20:48:27.647949  103439 command_runner.go:130] > [crio.stats]
	I1002 20:48:27.647958  103439 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1002 20:48:27.647966  103439 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1002 20:48:27.647973  103439 command_runner.go:130] > # stats_collection_period = 0
	I1002 20:48:27.647994  103439 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1002 20:48:27.648004  103439 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1002 20:48:27.648009  103439 command_runner.go:130] > # collection_period = 0
	I1002 20:48:27.648051  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627189517Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1002 20:48:27.648070  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627217069Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1002 20:48:27.648087  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627236914Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1002 20:48:27.648106  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627255188Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1002 20:48:27.648122  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.62731995Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:48:27.648141  103439 command_runner.go:130] ! time="2025-10-02T20:48:27.627489035Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1002 20:48:27.648161  103439 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
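	The update messages above show the layering order: the single file /etc/crio/crio.conf (skipped here because it does not exist), then the drop-in files under /etc/crio/crio.conf.d applied in lexical order. As a sketch (file name and value hypothetical), a late-sorting drop-in overrides just the keys it sets:

	# /etc/crio/crio.conf.d/99-local.conf (hypothetical)
	[crio.runtime]
	log_level = "debug"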
	I1002 20:48:27.648318  103439 cni.go:84] Creating CNI manager for ""
	I1002 20:48:27.648331  103439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:48:27.648354  103439 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:48:27.648401  103439 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:48:27.648942  103439 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 20:48:27.649009  103439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:48:27.657181  103439 command_runner.go:130] > kubeadm
	I1002 20:48:27.657198  103439 command_runner.go:130] > kubectl
	I1002 20:48:27.657203  103439 command_runner.go:130] > kubelet
	I1002 20:48:27.657948  103439 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:48:27.658013  103439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:48:27.665603  103439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:48:27.678534  103439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:48:27.691111  103439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1002 20:48:27.703366  103439 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:48:27.707046  103439 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1002 20:48:27.707133  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:27.791376  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:27.804011  103439 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:48:27.804040  103439 certs.go:195] generating shared ca certs ...
	I1002 20:48:27.804056  103439 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:27.804180  103439 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:48:27.804232  103439 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:48:27.804241  103439 certs.go:257] generating profile certs ...
	I1002 20:48:27.804334  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:48:27.804375  103439 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:48:27.804412  103439 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:48:27.804424  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 20:48:27.804435  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 20:48:27.804453  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 20:48:27.804469  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 20:48:27.804481  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 20:48:27.804494  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 20:48:27.804506  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 20:48:27.804518  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 20:48:27.804560  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:48:27.804591  103439 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:48:27.804601  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:48:27.804623  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:48:27.804645  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:48:27.804666  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:48:27.804704  103439 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:48:27.804729  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 20:48:27.804763  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:27.804780  103439 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 20:48:27.805294  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:48:27.822974  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:48:27.840455  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:48:27.858368  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:48:27.877146  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:48:27.895282  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:48:27.912487  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:48:27.929452  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:48:27.947144  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:48:27.964177  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:48:27.981785  103439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:48:27.999006  103439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:48:28.011646  103439 ssh_runner.go:195] Run: openssl version
	I1002 20:48:28.017389  103439 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1002 20:48:28.017621  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:48:28.025902  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029403  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029446  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.029489  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:48:28.063085  103439 command_runner.go:130] > 3ec20f2e
	I1002 20:48:28.063182  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:48:28.071431  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:48:28.080075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083770  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083829  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.083901  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:48:28.117894  103439 command_runner.go:130] > b5213941
	I1002 20:48:28.117982  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:48:28.126480  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:48:28.135075  103439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138711  103439 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138759  103439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.138809  103439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:48:28.172582  103439 command_runner.go:130] > 51391683
	I1002 20:48:28.172931  103439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
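Note: the three `openssl x509 -hash -noout` calls above compute the OpenSSL subject hashes (3ec20f2e, b5213941, 51391683) that name each CA symlink in /etc/ssl/certs as <hash>.0, which is how TLS clients locate trusted CAs. A minimal Go sketch of that install step, shelling out to openssl; the installCACert helper and the paths in main are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the "openssl x509 -hash -noout" + "ln -fs"
    // sequence in the log: hash the PEM's subject, then link the file
    // into /etc/ssl/certs as <hash>.0.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }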
	I1002 20:48:28.180914  103439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184555  103439 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:48:28.184579  103439 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1002 20:48:28.184588  103439 command_runner.go:130] > Device: 8,1	Inode: 811435      Links: 1
	I1002 20:48:28.184598  103439 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1002 20:48:28.184608  103439 command_runner.go:130] > Access: 2025-10-02 20:44:21.070069799 +0000
	I1002 20:48:28.184616  103439 command_runner.go:130] > Modify: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184623  103439 command_runner.go:130] > Change: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184628  103439 command_runner.go:130] >  Birth: 2025-10-02 20:40:16.616531062 +0000
	I1002 20:48:28.184684  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:48:28.218476  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.218920  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:48:28.253813  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.254026  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:48:28.288477  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.288852  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:48:28.322969  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.323293  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:48:28.357073  103439 command_runner.go:130] > Certificate will not expire
	I1002 20:48:28.357354  103439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 20:48:28.390854  103439 command_runner.go:130] > Certificate will not expire
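Note: `openssl x509 -checkend 86400` exits zero when the certificate stays valid for at least another 86400 seconds (24 hours), which is why every check above reports "Certificate will not expire". A hedged Go equivalent using crypto/x509; the expiresWithin helper is illustrative, not minikube's implementation:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file
    // expires within d, the crypto/x509 analogue of `-checkend`.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	// 86400s = 24h, matching the -checkend 86400 calls in the log.
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("Certificate will expire")
    	} else {
    		fmt.Println("Certificate will not expire")
    	}
    }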
	I1002 20:48:28.391133  103439 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:48:28.391217  103439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:48:28.391280  103439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:48:28.420217  103439 cri.go:89] found id: ""
	I1002 20:48:28.420280  103439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:48:28.427672  103439 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1002 20:48:28.427700  103439 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1002 20:48:28.427710  103439 command_runner.go:130] > /var/lib/minikube/etcd:
	I1002 20:48:28.428396  103439 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:48:28.428413  103439 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:48:28.428455  103439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:48:28.435936  103439 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:48:28.436039  103439 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-012915" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.436106  103439 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "functional-012915" cluster setting kubeconfig missing "functional-012915" context setting]
	I1002 20:48:28.436458  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
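Note: kubeconfig.go treats the profile as missing when neither a cluster nor a context entry named "functional-012915" exists in the kubeconfig, then rewrites the file under the WriteFile lock shown above. A sketch of that presence check with client-go's clientcmd loader; needsRepair is an illustrative name, not minikube's function:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    // needsRepair reports whether the kubeconfig at path is missing the
    // cluster or context entry for the given profile name.
    func needsRepair(path, profile string) (bool, error) {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return false, err
    	}
    	_, hasCluster := cfg.Clusters[profile]
    	_, hasContext := cfg.Contexts[profile]
    	return !hasCluster || !hasContext, nil
    }

    func main() {
    	repair, err := needsRepair(os.Getenv("KUBECONFIG"), "functional-012915")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubeconfig needs repair:", repair)
    }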
	I1002 20:48:28.437072  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.437245  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.437717  103439 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 20:48:28.437732  103439 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 20:48:28.437753  103439 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 20:48:28.437760  103439 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 20:48:28.437765  103439 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 20:48:28.437782  103439 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 20:48:28.438160  103439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:48:28.446094  103439 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 20:48:28.446137  103439 kubeadm.go:601] duration metric: took 17.717766ms to restartPrimaryControlPlane
	I1002 20:48:28.446149  103439 kubeadm.go:402] duration metric: took 55.025148ms to StartCluster
	I1002 20:48:28.446168  103439 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.446285  103439 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.447035  103439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:48:28.447291  103439 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 20:48:28.447487  103439 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:48:28.447429  103439 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 20:48:28.447531  103439 addons.go:69] Setting storage-provisioner=true in profile "functional-012915"
	I1002 20:48:28.447538  103439 addons.go:69] Setting default-storageclass=true in profile "functional-012915"
	I1002 20:48:28.447553  103439 addons.go:238] Setting addon storage-provisioner=true in "functional-012915"
	I1002 20:48:28.447556  103439 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-012915"
	I1002 20:48:28.447587  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.447847  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.447963  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.456904  103439 out.go:179] * Verifying Kubernetes components...
	I1002 20:48:28.458283  103439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:48:28.468928  103439 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:48:28.469101  103439 kapi.go:59] client config for functional-012915: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 20:48:28.469369  103439 addons.go:238] Setting addon default-storageclass=true in "functional-012915"
	I1002 20:48:28.469428  103439 host.go:66] Checking if "functional-012915" exists ...
	I1002 20:48:28.469783  103439 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:48:28.469862  103439 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 20:48:28.471474  103439 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.471499  103439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 20:48:28.471557  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.496201  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.497174  103439 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.497196  103439 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 20:48:28.497262  103439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:48:28.518487  103439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:48:28.562123  103439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:48:28.575162  103439 node_ready.go:35] waiting up to 6m0s for node "functional-012915" to be "Ready" ...
	I1002 20:48:28.575316  103439 type.go:168] "Request Body" body=""
	I1002 20:48:28.575388  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:28.575672  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
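Note: node_ready.go issues the GET /api/v1/nodes/functional-012915 requests above on a roughly 500ms cadence until the node reports a Ready condition or the 6m timeout lapses; the empty status="" responses with milliseconds=0 are consistent with the "connection refused" dial failures warned about further down. A hedged client-go sketch of that polling loop; waitNodeReady is illustrative, not minikube's implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the node's Ready
    // condition is True, mirroring the GET loop in the log.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(cs, "functional-012915", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }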
	I1002 20:48:28.608117  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:28.625656  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.661232  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.663490  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.663556  103439 retry.go:31] will retry after 361.771557ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
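Note: each "will retry after ..." line below comes from retry.go backing off before re-running the failed kubectl apply; the delays grow unevenly (152ms up to several seconds), suggesting a jittered, roughly exponential schedule. A minimal sketch of that pattern, with an invented retryAfter helper and illustrative intervals, not minikube's actual backoff:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryAfter runs fn until it succeeds or attempts run out, sleeping
    // a jittered, growing interval between tries, as the "will retry
    // after ..." lines in the log do.
    func retryAfter(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Exponential growth plus random jitter.
    		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	err := retryAfter(5, 200*time.Millisecond, func() error {
    		return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
    	})
    	fmt.Println("giving up:", err)
    }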
	I1002 20:48:28.679351  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.679399  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.679416  103439 retry.go:31] will retry after 152.242547ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.831815  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:28.883542  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:28.883591  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:28.883623  103439 retry.go:31] will retry after 207.681653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.025956  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.075113  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.076262  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.076342  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.076623  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:29.077506  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.077533  103439 retry.go:31] will retry after 323.914971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.091861  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.140394  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.142831  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.142876  103439 retry.go:31] will retry after 594.351303ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.402253  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.454867  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.454924  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.454957  103439 retry.go:31] will retry after 314.476021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.576263  103439 type.go:168] "Request Body" body=""
	I1002 20:48:29.576411  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:29.576803  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:29.738004  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:29.769756  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:29.788694  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.790987  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.791025  103439 retry.go:31] will retry after 1.197724944s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822453  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:29.822502  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:29.822528  103439 retry.go:31] will retry after 662.931836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.075955  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.076032  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.076409  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:30.485957  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:30.538516  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:30.538557  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.538578  103439 retry.go:31] will retry after 1.629504367s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:30.575804  103439 type.go:168] "Request Body" body=""
	I1002 20:48:30.575880  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:30.576213  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:30.576271  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:30.989890  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.043558  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.043619  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.043637  103439 retry.go:31] will retry after 801.444903ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.075880  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.075960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.576114  103439 type.go:168] "Request Body" body=""
	I1002 20:48:31.576220  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:31.576603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:31.845951  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:31.899339  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:31.899391  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:31.899410  103439 retry.go:31] will retry after 2.181457366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.075931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.076334  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:32.168648  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:32.220495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:32.220539  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.220557  103439 retry.go:31] will retry after 1.373851602s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:32.576076  103439 type.go:168] "Request Body" body=""
	I1002 20:48:32.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:32.576533  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:32.576599  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:33.076393  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.076861  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:48:33.575875  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:33.576337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:33.595591  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:33.646012  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:33.648297  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:33.648332  103439 retry.go:31] will retry after 3.090030694s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.075896  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.075981  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.076263  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:34.081465  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:34.133647  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:34.133724  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.133770  103439 retry.go:31] will retry after 3.497111827s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:34.576313  103439 type.go:168] "Request Body" body=""
	I1002 20:48:34.576409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:34.576832  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:34.576893  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:35.075636  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.076135  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:35.575728  103439 type.go:168] "Request Body" body=""
	I1002 20:48:35.575848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:35.576239  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.076110  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.076196  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.076574  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.575482  103439 type.go:168] "Request Body" body=""
	I1002 20:48:36.575578  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:36.575974  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:36.739297  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:36.791716  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:36.791786  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:36.791808  103439 retry.go:31] will retry after 4.619526112s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.076288  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.076368  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.076721  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:37.076814  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:37.576414  103439 type.go:168] "Request Body" body=""
	I1002 20:48:37.576492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:37.576867  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:37.632068  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:37.685537  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:37.685582  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:37.685612  103439 retry.go:31] will retry after 3.179037423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:38.076157  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.076230  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.076633  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:38.576327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:38.576425  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:38.576797  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.075409  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.075858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:39.575455  103439 type.go:168] "Request Body" body=""
	I1002 20:48:39.575567  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:39.575934  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:39.576000  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:40.075790  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.076280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.575900  103439 type.go:168] "Request Body" body=""
	I1002 20:48:40.575982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:40.576339  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:40.865793  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:40.922102  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:40.922154  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:40.922173  103439 retry.go:31] will retry after 8.017978865s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
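
Note that every apply here fails before the manifest is even evaluated: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver's /openapi/v2 endpoint, and with nothing listening on port 8441 that download is what returns connection refused, which kubectl reports as exit status 1 to the runner. A sketch of how a runner might surface that exit status, with the command line taken from the log but the exec-based helper itself an assumption (minikube's ssh_runner actually executes over SSH):

// Illustrative runner sketch; running kubectl via os/exec is an assumption
// made for this example, not minikube's actual ssh_runner mechanism.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storageclass.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Corresponds to "Process exited with status 1" in the log.
		fmt.Println("exit status:", ee.ExitCode())
	}
}
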
	I1002 20:48:41.075452  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.075541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.075959  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:41.412402  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:41.462892  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:41.465283  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.465317  103439 retry.go:31] will retry after 6.722422885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:41.575519  103439 type.go:168] "Request Body" body=""
	I1002 20:48:41.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:41.575978  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:41.576042  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:42.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.075773  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:42.575731  103439 type.go:168] "Request Body" body=""
	I1002 20:48:42.575835  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:42.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.075862  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.076025  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.076442  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:43.576156  103439 type.go:168] "Request Body" body=""
	I1002 20:48:43.576250  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:43.576635  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:43.576711  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:44.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.076398  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.076835  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:44.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:48:44.575566  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:44.575930  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.075679  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.075780  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.076197  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:45.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:48:45.575922  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:45.576287  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:46.075882  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.075956  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:46.076367  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:46.576093  103439 type.go:168] "Request Body" body=""
	I1002 20:48:46.576194  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:46.576549  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.076328  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.076667  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:47.576364  103439 type.go:168] "Request Body" body=""
	I1002 20:48:47.576474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:47.576869  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.075470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.075556  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.075935  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:48.188198  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:48.240819  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.240876  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.240960  103439 retry.go:31] will retry after 5.203774684s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.575470  103439 type.go:168] "Request Body" body=""
	I1002 20:48:48.575548  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:48.575916  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:48.575985  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:48.940390  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:48.992334  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:48.994935  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:48.994965  103439 retry.go:31] will retry after 7.700365391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:49.076327  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.076416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.076830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:49.575454  103439 type.go:168] "Request Body" body=""
	I1002 20:48:49.575554  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:49.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:50.075711  103439 type.go:168] "Request Body" body=""
	I1002 20:48:50.075826  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:50.076249  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:50.575864  103439 type.go:168] "Request Body" body=""
	I1002 20:48:50.575961  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:50.576351  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:50.576415  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:51.076075  103439 type.go:168] "Request Body" body=""
	I1002 20:48:51.076176  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:51.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:51.575972  103439 type.go:168] "Request Body" body=""
	I1002 20:48:51.576054  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:51.576387  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:52.076055  103439 type.go:168] "Request Body" body=""
	I1002 20:48:52.076146  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:52.076526  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:52.576203  103439 type.go:168] "Request Body" body=""
	I1002 20:48:52.576289  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:52.576688  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:52.576771  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:53.076363  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.076444  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.076831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:53.445247  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:48:53.496043  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:53.498518  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.498561  103439 retry.go:31] will retry after 18.668445084s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:53.575895  103439 type.go:168] "Request Body" body=""
	I1002 20:48:53.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:53.576330  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:54.076074  103439 type.go:168] "Request Body" body=""
	I1002 20:48:54.076158  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:54.076568  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:54.576230  103439 type.go:168] "Request Body" body=""
	I1002 20:48:54.576305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:54.576631  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:55.075724  103439 type.go:168] "Request Body" body=""
	I1002 20:48:55.075820  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:55.076207  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:55.076287  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:55.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:48:55.575924  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:55.576280  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.075883  103439 type.go:168] "Request Body" body=""
	I1002 20:48:56.075963  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:56.076361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.576037  103439 type.go:168] "Request Body" body=""
	I1002 20:48:56.576120  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:56.576513  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:56.695837  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:48:56.749495  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:48:56.749534  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:56.749553  103439 retry.go:31] will retry after 17.757887541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:48:57.076066  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.076153  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.076611  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:57.076679  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:48:57.576325  103439 type.go:168] "Request Body" body=""
	I1002 20:48:57.576416  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:57.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:58.076237  103439 type.go:168] "Request Body" body=""
	I1002 20:48:58.076314  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:58.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:58.575412  103439 type.go:168] "Request Body" body=""
	I1002 20:48:58.575504  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:58.575865  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:59.075437  103439 type.go:168] "Request Body" body=""
	I1002 20:48:59.075528  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:59.075976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:48:59.575438  103439 type.go:168] "Request Body" body=""
	I1002 20:48:59.575539  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:48:59.575952  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:48:59.576014  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:00.075849  103439 type.go:168] "Request Body" body=""
	I1002 20:49:00.075928  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:00.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:00.575974  103439 type.go:168] "Request Body" body=""
	I1002 20:49:00.576072  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:00.576461  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:01.076180  103439 type.go:168] "Request Body" body=""
	I1002 20:49:01.076280  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:01.076643  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:01.576370  103439 type.go:168] "Request Body" body=""
	I1002 20:49:01.576466  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:01.576896  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:01.576970  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:02.075515  103439 type.go:168] "Request Body" body=""
	I1002 20:49:02.075606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:02.075985  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:02.575600  103439 type.go:168] "Request Body" body=""
	I1002 20:49:02.575686  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:02.576112  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:03.075664  103439 type.go:168] "Request Body" body=""
	I1002 20:49:03.075769  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:03.076121  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:03.575712  103439 type.go:168] "Request Body" body=""
	I1002 20:49:03.575846  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:03.576202  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:04.075891  103439 type.go:168] "Request Body" body=""
	I1002 20:49:04.075970  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:04.076322  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:04.076381  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:04.576087  103439 type.go:168] "Request Body" body=""
	I1002 20:49:04.576249  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:04.576616  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:05.075403  103439 type.go:168] "Request Body" body=""
	I1002 20:49:05.075481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:05.075839  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:05.575464  103439 type.go:168] "Request Body" body=""
	I1002 20:49:05.575572  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:05.575972  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:06.075594  103439 type.go:168] "Request Body" body=""
	I1002 20:49:06.075677  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:06.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:06.575658  103439 type.go:168] "Request Body" body=""
	I1002 20:49:06.575767  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:06.576141  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:06.576200  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:07.075781  103439 type.go:168] "Request Body" body=""
	I1002 20:49:07.075865  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:07.076245  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:07.575885  103439 type.go:168] "Request Body" body=""
	I1002 20:49:07.575974  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:07.576361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:08.075998  103439 type.go:168] "Request Body" body=""
	I1002 20:49:08.076084  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:08.076429  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:08.576307  103439 type.go:168] "Request Body" body=""
	I1002 20:49:08.576413  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:08.576814  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:08.576876  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:09.075362  103439 type.go:168] "Request Body" body=""
	I1002 20:49:09.075437  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:09.075799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:09.575387  103439 type.go:168] "Request Body" body=""
	I1002 20:49:09.575482  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:09.575850  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:10.075783  103439 type.go:168] "Request Body" body=""
	I1002 20:49:10.075869  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:10.076249  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:10.575831  103439 type.go:168] "Request Body" body=""
	I1002 20:49:10.575935  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:10.576353  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:11.076044  103439 type.go:168] "Request Body" body=""
	I1002 20:49:11.076133  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:11.076599  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:11.076668  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:11.576237  103439 type.go:168] "Request Body" body=""
	I1002 20:49:11.576331  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:11.576683  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:12.076335  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.076430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.076838  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:12.168044  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:12.220925  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:12.220980  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.221004  103439 retry.go:31] will retry after 18.69466529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:12.575446  103439 type.go:168] "Request Body" body=""
	I1002 20:49:12.575535  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:12.575932  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:13.075529  103439 type.go:168] "Request Body" body=""
	I1002 20:49:13.075604  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:13.075957  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:13.575562  103439 type.go:168] "Request Body" body=""
	I1002 20:49:13.575652  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:13.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:13.576135  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:14.075639  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.075761  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.076134  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:14.507714  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:14.560377  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:14.560441  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.560472  103439 retry.go:31] will retry after 29.222161527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:14.575630  103439 type.go:168] "Request Body" body=""
	I1002 20:49:14.575695  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:14.575976  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:15.075906  103439 type.go:168] "Request Body" body=""
	I1002 20:49:15.075982  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:15.076361  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:49:15.575992  103439 type.go:168] "Request Body" body=""
	I1002 20:49:15.576071  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:15.576414  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:49:15.576474  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:49:16.076107  103439 type.go:168] "Request Body" body=""
	I1002 20:49:16.076212  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:49:16.076649  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 polled every ~500ms from 20:49:16.576 through 20:49:30.576, each attempt failing identically with "connect: connection refused"; the node_ready.go:55 "will retry" warning recurs at 20:49:18, 20:49:20, 20:49:22, 20:49:25, 20:49:27, and 20:49:29 ...]
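The block above is one iteration of a fixed-cadence readiness poll: the node object is requested twice a second and the refused TCP connection is treated as retryable. A minimal, self-contained Go sketch of that pattern (illustrative only; the URL and the connection-refused handling are taken from the log, the wait budget and everything else are assumptions, not minikube's actual node_ready.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Poll the node object every 500ms, as the log above does; the test
	// cluster's certificate is self-signed, so verification is skipped.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-012915" // from the log
	deadline := time.Now().Add(6 * time.Minute)                       // assumed wait budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("node object fetched; a real check would now inspect the Ready condition")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the apiserver")
}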
	I1002 20:49:30.916459  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:30.966432  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:30.968861  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:30.968901  103439 retry.go:31] will retry after 21.359119468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
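Each addon manifest is applied with `kubectl apply --force` and, on failure, re-queued after a roughly 20-second backoff, as the retry.go line above records. A hedged sketch of that apply-and-retry shape (the structure is an assumption, not minikube's retry.go; the manifest path and KUBECONFIG value are copied from the log):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it succeeds
// or attempts run out, sleeping a jittered ~20s between tries (the logged
// intervals are 21.36s and 22.85s).
func applyWithRetry(manifest string, attempts int) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig") // from the log
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply failed: %w\n%s", err, out)
		wait := 15*time.Second + time.Duration(rand.Intn(10_000))*time.Millisecond
		fmt.Printf("will retry after %s\n", wait)
		time.Sleep(wait)
	}
	return lastErr
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3); err != nil {
		fmt.Println("giving up:", err)
	}
}

Note that the kubectl error itself suggests `--validate=false`, but that flag only skips the openapi download used for client-side validation; it would not help here, because the apply still needs a reachable apiserver and nothing is listening on port 8441.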
	[... polling resumes at the same ~500ms cadence from 20:49:31.076 through 20:49:43.577, still connection refused; node_ready.go:55 "will retry" warnings at 20:49:31, 20:49:33, 20:49:36, 20:49:38, 20:49:40, and 20:49:43 ...]
	I1002 20:49:43.782991  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:49:43.835836  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:43.835901  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 20:49:43.835926  103439 retry.go:31] will retry after 22.850861202s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical ~500ms polls continue from 20:49:44.076 through 20:49:52.077, still connection refused; "will retry" warnings at 20:49:45, 20:49:48, 20:49:50, and 20:49:52 ...]
	I1002 20:49:52.328832  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 20:49:52.382480  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382546  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:49:52.382704  103439 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
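At this point the storage-provisioner callback has exhausted its retry and the addon is reported as failed, even though the root cause is simply that nothing is listening on port 8441 yet. One possible mitigation, shown here as an illustrative sketch and not something this code path does, is to gate the addon applies on the apiserver's /readyz endpoint, a standard kube-apiserver health endpoint that default clusters typically expose without credentials:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls /readyz until it answers 200 OK or the budget runs
// out, so that later applies do not race a server that is still down.
// The base URL and budget are illustrative values, not taken from minikube.
func waitForAPIServer(base string, budget time.Duration) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(base + "/readyz"); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(time.Second)
	}
	return false
}

func main() {
	if waitForAPIServer("https://localhost:8441", 2*time.Minute) {
		fmt.Println("apiserver ready; safe to apply addon manifests")
	} else {
		fmt.Println("apiserver never became ready; report failure instead of retry-looping")
	}
}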
	[... polling continues every ~500ms from 20:49:52.576 through 20:50:06.576, still connection refused; "will retry" warnings at 20:49:54, 20:49:56, 20:49:59, 20:50:01, 20:50:03, and 20:50:05 ...]
	I1002 20:50:06.687689  103439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 20:50:06.737429  103439 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739791  103439 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 20:50:06.739905  103439 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 20:50:06.742850  103439 out.go:179] * Enabled addons: 
	I1002 20:50:06.744531  103439 addons.go:514] duration metric: took 1m38.297120179s for enable addons: enabled=[]
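The "duration metric" values in these logs are Go duration strings, so the 1m38s figure above can be parsed back programmatically when comparing enable-addons time across report runs; a minimal standard-library example:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Parse the addons.go duration metric back into a time.Duration.
	d, err := time.ParseDuration("1m38.297120179s") // value from the line above
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.3f seconds\n", d.Seconds()) // 98.297 seconds
}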
	[... after the addon phase gives up, the node poll keeps running at the same ~500ms cadence from 20:50:07.076 onward, still connection refused; "will retry" warnings at 20:50:07 and 20:50:09; the captured log is cut off mid-request at 20:50:11.075 ...]
	 >
	I1002 20:50:11.076109  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:11.575835  103439 type.go:168] "Request Body" body=""
	I1002 20:50:11.575921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:11.576276  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:12.076113  103439 type.go:168] "Request Body" body=""
	I1002 20:50:12.076186  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:12.076607  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:12.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:12.575967  103439 type.go:168] "Request Body" body=""
	I1002 20:50:12.576054  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:12.576464  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:13.076341  103439 type.go:168] "Request Body" body=""
	I1002 20:50:13.076412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:13.076780  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:13.575533  103439 type.go:168] "Request Body" body=""
	I1002 20:50:13.575606  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:13.576033  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:14.075814  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.075900  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.076304  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:14.576194  103439 type.go:168] "Request Body" body=""
	I1002 20:50:14.576290  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:14.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:14.576695  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:15.075361  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.075442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.075840  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:15.575616  103439 type.go:168] "Request Body" body=""
	I1002 20:50:15.575700  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:15.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.075936  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.076365  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:16.576255  103439 type.go:168] "Request Body" body=""
	I1002 20:50:16.576335  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:16.576673  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:16.576732  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:17.075466  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.075545  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.075956  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:17.575727  103439 type.go:168] "Request Body" body=""
	I1002 20:50:17.575832  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:17.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.076032  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.076123  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.076487  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:18.576201  103439 type.go:168] "Request Body" body=""
	I1002 20:50:18.576280  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:18.576630  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:19.075359  103439 type.go:168] "Request Body" body=""
	I1002 20:50:19.075436  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:19.075879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:19.075940  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:19.575662  103439 type.go:168] "Request Body" body=""
	I1002 20:50:19.575765  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:19.576112  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:20.075942  103439 type.go:168] "Request Body" body=""
	I1002 20:50:20.076022  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:20.076365  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:20.576167  103439 type.go:168] "Request Body" body=""
	I1002 20:50:20.576281  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:20.576638  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:21.075449  103439 type.go:168] "Request Body" body=""
	I1002 20:50:21.075533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:21.075947  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:21.076012  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:21.575710  103439 type.go:168] "Request Body" body=""
	I1002 20:50:21.575816  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:21.576163  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:22.076027  103439 type.go:168] "Request Body" body=""
	I1002 20:50:22.076112  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:22.076486  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:22.576328  103439 type.go:168] "Request Body" body=""
	I1002 20:50:22.576406  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:22.576794  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:23.075575  103439 type.go:168] "Request Body" body=""
	I1002 20:50:23.075653  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:23.076015  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:23.076102  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:23.575919  103439 type.go:168] "Request Body" body=""
	I1002 20:50:23.576001  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:23.576441  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:24.076301  103439 type.go:168] "Request Body" body=""
	I1002 20:50:24.076385  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:24.076732  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:24.575497  103439 type.go:168] "Request Body" body=""
	I1002 20:50:24.575575  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:24.575977  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:25.075906  103439 type.go:168] "Request Body" body=""
	I1002 20:50:25.076002  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:25.076372  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:25.076430  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:25.575772  103439 type.go:168] "Request Body" body=""
	I1002 20:50:25.575847  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:25.576205  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:26.075989  103439 type.go:168] "Request Body" body=""
	I1002 20:50:26.076058  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:26.076440  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:26.576301  103439 type.go:168] "Request Body" body=""
	I1002 20:50:26.576389  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:26.576734  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:27.075548  103439 type.go:168] "Request Body" body=""
	I1002 20:50:27.075630  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:27.076087  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:27.575871  103439 type.go:168] "Request Body" body=""
	I1002 20:50:27.575960  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:27.576295  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:27.576366  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:28.075983  103439 type.go:168] "Request Body" body=""
	I1002 20:50:28.076395  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:28.076839  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:28.575729  103439 type.go:168] "Request Body" body=""
	I1002 20:50:28.575838  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:28.576242  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:29.075826  103439 type.go:168] "Request Body" body=""
	I1002 20:50:29.075899  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:29.076269  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:29.576058  103439 type.go:168] "Request Body" body=""
	I1002 20:50:29.576161  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:29.576557  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:29.576620  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:30.075394  103439 type.go:168] "Request Body" body=""
	I1002 20:50:30.075476  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:30.075848  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:30.575440  103439 type.go:168] "Request Body" body=""
	I1002 20:50:30.575513  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:30.575928  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:31.075504  103439 type.go:168] "Request Body" body=""
	I1002 20:50:31.075583  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:31.075947  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:31.575533  103439 type.go:168] "Request Body" body=""
	I1002 20:50:31.575614  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:31.576035  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:32.075585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.075666  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.076026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:32.076094  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:32.575632  103439 type.go:168] "Request Body" body=""
	I1002 20:50:32.575709  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:32.576117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.075652  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.075731  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.076100  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:33.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:50:33.575758  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:33.576149  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:34.075715  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.075810  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.076153  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:34.076216  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:34.575779  103439 type.go:168] "Request Body" body=""
	I1002 20:50:34.575858  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:34.576247  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.076148  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.076233  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.076598  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:35.576262  103439 type.go:168] "Request Body" body=""
	I1002 20:50:35.576347  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:35.576802  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.075374  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.075824  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:36.575422  103439 type.go:168] "Request Body" body=""
	I1002 20:50:36.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:36.575848  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:36.575906  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:37.075445  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.075521  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.075904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:37.575460  103439 type.go:168] "Request Body" body=""
	I1002 20:50:37.575565  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:37.575952  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.075497  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.075579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.075949  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:38.575843  103439 type.go:168] "Request Body" body=""
	I1002 20:50:38.575923  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:38.576292  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:38.576357  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:39.075970  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.076045  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.076459  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:39.576183  103439 type.go:168] "Request Body" body=""
	I1002 20:50:39.576276  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:39.576637  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.075394  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.075469  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:40.575390  103439 type.go:168] "Request Body" body=""
	I1002 20:50:40.575465  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:40.575823  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:41.076191  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.076274  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.076628  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:41.076694  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:41.576291  103439 type.go:168] "Request Body" body=""
	I1002 20:50:41.576370  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:41.576770  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.076380  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.076481  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.076834  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:42.575420  103439 type.go:168] "Request Body" body=""
	I1002 20:50:42.575496  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:42.575951  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.075513  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.075604  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.075967  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:43.575585  103439 type.go:168] "Request Body" body=""
	I1002 20:50:43.575664  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:43.576070  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:43.576146  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:44.075681  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.075873  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.076261  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:44.575868  103439 type.go:168] "Request Body" body=""
	I1002 20:50:44.575964  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:44.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.076248  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.076357  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.076714  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:45.576035  103439 type.go:168] "Request Body" body=""
	I1002 20:50:45.576124  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:45.576501  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:45.576565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:46.076153  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.076231  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.076589  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:46.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:50:46.576334  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:46.576706  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.076362  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.076446  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:47.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:50:47.575474  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:47.575854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:48.075429  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.075510  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.075856  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:48.075914  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:48.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:48.575495  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:48.575887  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.075463  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.075543  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.075937  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:49.575485  103439 type.go:168] "Request Body" body=""
	I1002 20:50:49.575579  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:49.575950  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:50.075789  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.075872  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.076231  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:50.076332  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:50.575815  103439 type.go:168] "Request Body" body=""
	I1002 20:50:50.575914  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:50.576296  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.075877  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.075952  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.076337  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:51.576100  103439 type.go:168] "Request Body" body=""
	I1002 20:50:51.576202  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:51.576539  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:52.076187  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.076262  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.076592  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:52.076677  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:52.576241  103439 type.go:168] "Request Body" body=""
	I1002 20:50:52.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:52.576787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.075460  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.075819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:53.575411  103439 type.go:168] "Request Body" body=""
	I1002 20:50:53.575520  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:53.575927  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.075511  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.075600  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.075971  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:54.575550  103439 type.go:168] "Request Body" body=""
	I1002 20:50:54.575643  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:54.576052  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:54.576136  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:55.075833  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.075908  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.076313  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:55.575945  103439 type.go:168] "Request Body" body=""
	I1002 20:50:55.576033  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:55.576428  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.076124  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.076205  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.076588  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:56.576221  103439 type.go:168] "Request Body" body=""
	I1002 20:50:56.576325  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:56.576662  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:56.576724  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:57.076306  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.076386  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.076786  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:57.575325  103439 type.go:168] "Request Body" body=""
	I1002 20:50:57.575412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:57.575787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.076352  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.076422  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.076854  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:58.575806  103439 type.go:168] "Request Body" body=""
	I1002 20:50:58.575901  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:58.576260  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:50:59.075853  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.075934  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.076321  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:50:59.076383  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:50:59.575967  103439 type.go:168] "Request Body" body=""
	I1002 20:50:59.576070  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:50:59.576437  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:51:00.076247  103439 type.go:168] "Request Body" body=""
	I1002 20:51:00.076327  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:51:00.076671  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:51:01.575909  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	[the GET/empty-response cycle above repeats unchanged every ~500ms from 20:51:00 through 20:52:02; node_ready.go emits the same "connection refused" warning roughly every 2 seconds, the last at 20:52:00.576089]
	W1002 20:52:00.576089  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:02.075714  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.075815  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.076186  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:02.575783  103439 type.go:168] "Request Body" body=""
	I1002 20:52:02.575871  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:02.576224  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:02.576299  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:03.075796  103439 type.go:168] "Request Body" body=""
	I1002 20:52:03.075881  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:03.076235  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:03.575826  103439 type.go:168] "Request Body" body=""
	I1002 20:52:03.575903  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:03.576282  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:04.075959  103439 type.go:168] "Request Body" body=""
	I1002 20:52:04.076039  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:04.076391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:04.576109  103439 type.go:168] "Request Body" body=""
	I1002 20:52:04.576183  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:04.576520  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:04.576584  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:05.075455  103439 type.go:168] "Request Body" body=""
	I1002 20:52:05.075532  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:05.075890  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:05.575433  103439 type.go:168] "Request Body" body=""
	I1002 20:52:05.575505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:05.575871  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:06.075440  103439 type.go:168] "Request Body" body=""
	I1002 20:52:06.075523  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:06.075827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:06.575497  103439 type.go:168] "Request Body" body=""
	I1002 20:52:06.575590  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:06.576026  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:07.075591  103439 type.go:168] "Request Body" body=""
	I1002 20:52:07.075672  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:07.076053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:07.076126  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:07.575663  103439 type.go:168] "Request Body" body=""
	I1002 20:52:07.575766  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:07.576128  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:08.075654  103439 type.go:168] "Request Body" body=""
	I1002 20:52:08.075729  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:08.076096  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:08.575925  103439 type.go:168] "Request Body" body=""
	I1002 20:52:08.576003  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:08.576346  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:09.076056  103439 type.go:168] "Request Body" body=""
	I1002 20:52:09.076147  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:09.076530  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:09.076595  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:09.576165  103439 type.go:168] "Request Body" body=""
	I1002 20:52:09.576244  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:09.576584  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:10.075437  103439 type.go:168] "Request Body" body=""
	I1002 20:52:10.075510  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:10.075873  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:10.575468  103439 type.go:168] "Request Body" body=""
	I1002 20:52:10.575558  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:10.575906  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:11.075492  103439 type.go:168] "Request Body" body=""
	I1002 20:52:11.075568  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:11.075940  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:11.575529  103439 type.go:168] "Request Body" body=""
	I1002 20:52:11.575621  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:11.575986  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:11.576046  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:12.075605  103439 type.go:168] "Request Body" body=""
	I1002 20:52:12.075682  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:12.076073  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:12.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:52:12.575763  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:12.576125  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:13.075722  103439 type.go:168] "Request Body" body=""
	I1002 20:52:13.075828  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:13.076171  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:13.575730  103439 type.go:168] "Request Body" body=""
	I1002 20:52:13.575836  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:13.576181  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:13.576254  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:14.075831  103439 type.go:168] "Request Body" body=""
	I1002 20:52:14.075921  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:14.076324  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:14.575966  103439 type.go:168] "Request Body" body=""
	I1002 20:52:14.576045  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:14.576396  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:15.076397  103439 type.go:168] "Request Body" body=""
	I1002 20:52:15.076484  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:15.076845  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:15.575989  103439 type.go:168] "Request Body" body=""
	I1002 20:52:15.576066  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:15.576461  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:15.576526  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:16.076140  103439 type.go:168] "Request Body" body=""
	I1002 20:52:16.076235  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:16.076620  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:16.576345  103439 type.go:168] "Request Body" body=""
	I1002 20:52:16.576420  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:16.576818  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:17.075412  103439 type.go:168] "Request Body" body=""
	I1002 20:52:17.075504  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:17.075868  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:17.575510  103439 type.go:168] "Request Body" body=""
	I1002 20:52:17.575592  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:17.575975  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:18.075585  103439 type.go:168] "Request Body" body=""
	I1002 20:52:18.075665  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:18.076061  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:18.076136  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:18.575985  103439 type.go:168] "Request Body" body=""
	I1002 20:52:18.576059  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:18.576415  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:19.076058  103439 type.go:168] "Request Body" body=""
	I1002 20:52:19.076159  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:19.076526  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:19.576216  103439 type.go:168] "Request Body" body=""
	I1002 20:52:19.576306  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:19.576656  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:20.075581  103439 type.go:168] "Request Body" body=""
	I1002 20:52:20.075668  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:20.076037  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:20.575615  103439 type.go:168] "Request Body" body=""
	I1002 20:52:20.575692  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:20.576056  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:20.576123  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:21.075653  103439 type.go:168] "Request Body" body=""
	I1002 20:52:21.075760  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:21.076104  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:21.575691  103439 type.go:168] "Request Body" body=""
	I1002 20:52:21.575787  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:21.576159  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:22.075710  103439 type.go:168] "Request Body" body=""
	I1002 20:52:22.075808  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:22.076168  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:22.575725  103439 type.go:168] "Request Body" body=""
	I1002 20:52:22.575823  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:22.576174  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:22.576239  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:23.075794  103439 type.go:168] "Request Body" body=""
	I1002 20:52:23.075868  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:23.076225  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:23.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:52:23.575550  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:23.575980  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:24.075592  103439 type.go:168] "Request Body" body=""
	I1002 20:52:24.075681  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:24.076032  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:24.575657  103439 type.go:168] "Request Body" body=""
	I1002 20:52:24.575768  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:24.576132  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:25.075932  103439 type.go:168] "Request Body" body=""
	I1002 20:52:25.076017  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:25.076379  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:25.076450  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:25.576068  103439 type.go:168] "Request Body" body=""
	I1002 20:52:25.576165  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:25.576567  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:26.076267  103439 type.go:168] "Request Body" body=""
	I1002 20:52:26.076346  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:26.076713  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:26.576395  103439 type.go:168] "Request Body" body=""
	I1002 20:52:26.576472  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:26.576858  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:27.075411  103439 type.go:168] "Request Body" body=""
	I1002 20:52:27.075491  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:27.075850  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:27.575491  103439 type.go:168] "Request Body" body=""
	I1002 20:52:27.575573  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:27.575964  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:27.576028  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:28.075504  103439 type.go:168] "Request Body" body=""
	I1002 20:52:28.075596  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:28.075950  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:28.575839  103439 type.go:168] "Request Body" body=""
	I1002 20:52:28.576029  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:28.576476  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:29.075757  103439 type.go:168] "Request Body" body=""
	I1002 20:52:29.075848  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:29.076242  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:29.575836  103439 type.go:168] "Request Body" body=""
	I1002 20:52:29.575917  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:29.576348  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:29.576430  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:30.076283  103439 type.go:168] "Request Body" body=""
	I1002 20:52:30.076376  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:30.076774  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:30.575345  103439 type.go:168] "Request Body" body=""
	I1002 20:52:30.575422  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:30.575772  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:31.075417  103439 type.go:168] "Request Body" body=""
	I1002 20:52:31.075490  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:31.075917  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:31.575405  103439 type.go:168] "Request Body" body=""
	I1002 20:52:31.575482  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:31.575879  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:32.075416  103439 type.go:168] "Request Body" body=""
	I1002 20:52:32.075492  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:32.075830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:32.075891  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:32.575384  103439 type.go:168] "Request Body" body=""
	I1002 20:52:32.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:32.575860  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:33.075424  103439 type.go:168] "Request Body" body=""
	I1002 20:52:33.075505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:33.075919  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:33.575575  103439 type.go:168] "Request Body" body=""
	I1002 20:52:33.575659  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:33.576049  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:34.075603  103439 type.go:168] "Request Body" body=""
	I1002 20:52:34.075689  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:34.076059  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:34.076133  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:34.575643  103439 type.go:168] "Request Body" body=""
	I1002 20:52:34.575717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:34.576097  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:35.075919  103439 type.go:168] "Request Body" body=""
	I1002 20:52:35.076001  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:35.076401  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:35.576097  103439 type.go:168] "Request Body" body=""
	I1002 20:52:35.576190  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:35.576569  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:36.076242  103439 type.go:168] "Request Body" body=""
	I1002 20:52:36.076321  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:36.076684  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:36.076771  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:36.576350  103439 type.go:168] "Request Body" body=""
	I1002 20:52:36.576431  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:36.576806  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:37.075371  103439 type.go:168] "Request Body" body=""
	I1002 20:52:37.075445  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:37.075830  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:37.575379  103439 type.go:168] "Request Body" body=""
	I1002 20:52:37.575458  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:37.575827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:38.075420  103439 type.go:168] "Request Body" body=""
	I1002 20:52:38.075494  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:38.075864  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:38.575408  103439 type.go:168] "Request Body" body=""
	I1002 20:52:38.575505  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:38.575831  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:38.575904  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:39.075468  103439 type.go:168] "Request Body" body=""
	I1002 20:52:39.075555  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:39.075908  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:39.575486  103439 type.go:168] "Request Body" body=""
	I1002 20:52:39.575564  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:39.575943  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:40.075840  103439 type.go:168] "Request Body" body=""
	I1002 20:52:40.075937  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:40.076335  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:40.576013  103439 type.go:168] "Request Body" body=""
	I1002 20:52:40.576104  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:40.576440  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:40.576500  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:41.076194  103439 type.go:168] "Request Body" body=""
	I1002 20:52:41.076306  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:41.076712  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:41.575323  103439 type.go:168] "Request Body" body=""
	I1002 20:52:41.575412  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:41.575799  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:42.075383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:42.075484  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:42.075843  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:42.575392  103439 type.go:168] "Request Body" body=""
	I1002 20:52:42.575469  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:42.575828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:43.075519  103439 type.go:168] "Request Body" body=""
	I1002 20:52:43.075612  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:43.076045  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:43.076121  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:43.575640  103439 type.go:168] "Request Body" body=""
	I1002 20:52:43.575711  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:43.576105  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:44.075717  103439 type.go:168] "Request Body" body=""
	I1002 20:52:44.075847  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:44.076211  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:44.575828  103439 type.go:168] "Request Body" body=""
	I1002 20:52:44.575911  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:44.576256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:45.076131  103439 type.go:168] "Request Body" body=""
	I1002 20:52:45.076212  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:45.076558  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:45.076640  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:45.576225  103439 type.go:168] "Request Body" body=""
	I1002 20:52:45.576305  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:45.576652  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:46.076299  103439 type.go:168] "Request Body" body=""
	I1002 20:52:46.076380  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:46.076766  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:46.575344  103439 type.go:168] "Request Body" body=""
	I1002 20:52:46.575417  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:46.575789  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:47.075373  103439 type.go:168] "Request Body" body=""
	I1002 20:52:47.075452  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:47.075833  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:47.575383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:47.575467  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:47.575823  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:47.575904  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:48.075383  103439 type.go:168] "Request Body" body=""
	I1002 20:52:48.075461  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:48.075828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:48.575654  103439 type.go:168] "Request Body" body=""
	I1002 20:52:48.575753  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:48.576167  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:49.075788  103439 type.go:168] "Request Body" body=""
	I1002 20:52:49.075878  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:49.076256  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:49.575841  103439 type.go:168] "Request Body" body=""
	I1002 20:52:49.575931  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:49.576281  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:52:49.576341  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:52:50.076152  103439 type.go:168] "Request Body" body=""
	I1002 20:52:50.076231  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:50.076577  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:50.576298  103439 type.go:168] "Request Body" body=""
	I1002 20:52:50.576372  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:50.576726  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:52:51.075356  103439 type.go:168] "Request Body" body=""
	I1002 20:52:51.075442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:52:51.075828  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the identical GET https://192.168.49.2:8441/api/v1/nodes/functional-012915 request/response pair repeats on a ~500 ms cadence from 20:52:51 through 20:53:52; every response comes back empty, and roughly every 2 s node_ready.go:55 logs the same warning: error getting node "functional-012915" condition "Ready" status (will retry): dial tcp 192.168.49.2:8441: connect: connection refused ...]
	W1002 20:53:52.576723  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:53.076346  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.076426  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.076819  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:53.575357  103439 type.go:168] "Request Body" body=""
	I1002 20:53:53.575435  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:53.575822  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.075408  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.075485  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.075889  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:54.575457  103439 type.go:168] "Request Body" body=""
	I1002 20:53:54.575534  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:54.575882  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:55.075838  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.075915  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.076266  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:55.076327  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:55.575878  103439 type.go:168] "Request Body" body=""
	I1002 20:53:55.575957  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:55.576307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.075931  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.076017  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.076382  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:56.576046  103439 type.go:168] "Request Body" body=""
	I1002 20:53:56.576133  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:56.576476  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:57.076106  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.076183  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.076505  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:57.076565  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:53:57.576226  103439 type.go:168] "Request Body" body=""
	I1002 20:53:57.576298  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:57.576629  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.076297  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.076394  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.076731  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:58.575639  103439 type.go:168] "Request Body" body=""
	I1002 20:53:58.575725  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:58.576105  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.075691  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.075862  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.076223  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:53:59.575805  103439 type.go:168] "Request Body" body=""
	I1002 20:53:59.575887  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:53:59.576267  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:53:59.576342  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:00.076234  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.076318  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.076665  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:00.576298  103439 type.go:168] "Request Body" body=""
	I1002 20:54:00.576374  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:00.576723  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.075366  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.075454  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.075825  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:01.575447  103439 type.go:168] "Request Body" body=""
	I1002 20:54:01.575533  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:01.575904  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:02.075556  103439 type.go:168] "Request Body" body=""
	I1002 20:54:02.075644  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:02.076053  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:02.076132  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:02.575602  103439 type.go:168] "Request Body" body=""
	I1002 20:54:02.575678  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:02.576035  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:03.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:03.075713  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:03.076098  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:03.575655  103439 type.go:168] "Request Body" body=""
	I1002 20:54:03.575732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:03.576098  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:04.075645  103439 type.go:168] "Request Body" body=""
	I1002 20:54:04.075732  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:04.076102  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:04.076162  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:04.575674  103439 type.go:168] "Request Body" body=""
	I1002 20:54:04.575774  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:04.576120  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:05.075981  103439 type.go:168] "Request Body" body=""
	I1002 20:54:05.076063  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:05.076424  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:05.576045  103439 type.go:168] "Request Body" body=""
	I1002 20:54:05.576128  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:05.576498  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:06.076278  103439 type.go:168] "Request Body" body=""
	I1002 20:54:06.076361  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:06.076719  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:06.076815  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:06.575347  103439 type.go:168] "Request Body" body=""
	I1002 20:54:06.575428  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:06.575821  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:07.075435  103439 type.go:168] "Request Body" body=""
	I1002 20:54:07.075516  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:07.075897  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:07.575486  103439 type.go:168] "Request Body" body=""
	I1002 20:54:07.575563  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:07.575958  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:08.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:08.075701  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:08.076060  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:08.575979  103439 type.go:168] "Request Body" body=""
	I1002 20:54:08.576066  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:08.576467  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:08.576529  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:09.076208  103439 type.go:168] "Request Body" body=""
	I1002 20:54:09.076292  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:09.076707  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:09.576320  103439 type.go:168] "Request Body" body=""
	I1002 20:54:09.576395  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:09.576817  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:10.075592  103439 type.go:168] "Request Body" body=""
	I1002 20:54:10.075669  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:10.076036  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:10.575606  103439 type.go:168] "Request Body" body=""
	I1002 20:54:10.575688  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:10.576056  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:11.075680  103439 type.go:168] "Request Body" body=""
	I1002 20:54:11.075788  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:11.076183  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:11.076274  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:11.575788  103439 type.go:168] "Request Body" body=""
	I1002 20:54:11.575870  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:11.576222  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:12.075860  103439 type.go:168] "Request Body" body=""
	I1002 20:54:12.075940  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:12.076307  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:12.575971  103439 type.go:168] "Request Body" body=""
	I1002 20:54:12.576043  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:12.576403  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:13.076171  103439 type.go:168] "Request Body" body=""
	I1002 20:54:13.076258  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:13.076628  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:13.076688  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:13.576261  103439 type.go:168] "Request Body" body=""
	I1002 20:54:13.576339  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:13.576685  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.076408  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.076488  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.076857  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:14.575484  103439 type.go:168] "Request Body" body=""
	I1002 20:54:14.575582  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:14.575948  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.075808  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.075891  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.076275  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:15.575894  103439 type.go:168] "Request Body" body=""
	I1002 20:54:15.575975  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:15.576435  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:15.576516  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:16.076119  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.076226  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.076603  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:16.576326  103439 type.go:168] "Request Body" body=""
	I1002 20:54:16.576403  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:16.576788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.075351  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.075430  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.075787  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:17.575401  103439 type.go:168] "Request Body" body=""
	I1002 20:54:17.575559  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:17.575961  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:18.075538  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.075619  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.075997  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:18.076063  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:18.575954  103439 type.go:168] "Request Body" body=""
	I1002 20:54:18.576031  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:18.576391  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.076057  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.076145  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.076521  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:19.576266  103439 type.go:168] "Request Body" body=""
	I1002 20:54:19.576354  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:19.576728  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.075522  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.075613  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.075992  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:20.575620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:20.575699  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:20.576111  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:20.576172  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:21.075690  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.075834  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.076211  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:21.575853  103439 type.go:168] "Request Body" body=""
	I1002 20:54:21.575938  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:21.576327  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.076012  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.076106  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.076455  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:22.576180  103439 type.go:168] "Request Body" body=""
	I1002 20:54:22.576267  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:22.576639  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:22.576703  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:23.076280  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.076362  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.076729  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:23.575332  103439 type.go:168] "Request Body" body=""
	I1002 20:54:23.575409  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:23.575788  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.075381  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.075455  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.075827  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:24.575436  103439 type.go:168] "Request Body" body=""
	I1002 20:54:24.575524  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:24.575897  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:25.075680  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.075782  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.076141  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:25.076204  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:25.575730  103439 type.go:168] "Request Body" body=""
	I1002 20:54:25.575836  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:25.576238  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.075827  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.075905  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.076277  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:26.576092  103439 type.go:168] "Request Body" body=""
	I1002 20:54:26.576245  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:26.576650  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:27.076357  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.076442  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.076807  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1002 20:54:27.076864  103439 node_ready.go:55] error getting node "functional-012915" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-012915": dial tcp 192.168.49.2:8441: connect: connection refused
	I1002 20:54:27.575463  103439 type.go:168] "Request Body" body=""
	I1002 20:54:27.575541  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:27.576016  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.075620  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.075717  103439 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-012915" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	 >
	I1002 20:54:28.076117  103439 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1002 20:54:28.576130  103439 type.go:168] "Request Body" body=""
	I1002 20:54:28.576214  103439 node_ready.go:38] duration metric: took 6m0.001003861s for node "functional-012915" to be "Ready" ...
	I1002 20:54:28.579396  103439 out.go:203] 
	W1002 20:54:28.581273  103439 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 20:54:28.581294  103439 out.go:285] * 
	W1002 20:54:28.583020  103439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 20:54:28.584974  103439 out.go:203] 
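
The round_trippers/node_ready entries above show the shape of the failure: minikube polls GET /api/v1/nodes/functional-012915 every 500ms, treats "connection refused" as retryable, and gives up when the 6-minute wait deadline expires. A minimal Go sketch of that loop, standard library only (the real client authenticates with cluster certificates; InsecureSkipVerify here is just to keep the sketch self-contained):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 6-minute overall deadline, matching "wait 6m0s for node" in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://192.168.49.2:8441/api/v1/nodes/functional-012915"

	// Poll every 500ms, the interval visible in the timestamps above.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			// Corresponds to "WaitNodeCondition: context deadline exceeded".
			fmt.Println("timed out waiting for node")
			return
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				// e.g. "dial tcp 192.168.49.2:8441: connect: connection refused"
				continue
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("node object reachable; Ready condition can be checked")
				return
			}
		}
	}
}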
	
	
	==> CRI-O <==
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.132348383Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fa53cd29-447a-42e9-b93a-25a28e68ae7a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426552097Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426684113Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.426721338Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f27855ae-5e09-49c2-9180-f4f95314b986 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864572409Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864779357Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.864827587Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=bd2fdb0c-ee85-4f1f-ba1f-4d410640930b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890484237Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890631905Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.890675201Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=19fbbaa0-e85b-4b5c-a1a4-08e281d8148a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914567456Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914714164Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:38 functional-012915 crio[2919]: time="2025-10-02T20:54:38.914774629Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=8dc96eba-012b-41ac-a775-db61a6107b2f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:39 functional-012915 crio[2919]: time="2025-10-02T20:54:39.38454262Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=042112ea-c568-48d2-8cce-9b4035e7b4d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.855570746Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ceaff541-918e-45c6-9c77-b7ef3ace8a20 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.85665554Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=74a1f9b4-c595-4f35-a980-9f15da0e835b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.857620566Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.857847235Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.861721185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.862295643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.879442264Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.881301102Z" level=info msg="createCtr: deleting container ID 5276730f09a16f1a890c2b6f3b4db9c77b69316f332db7e98d2d31e524dc0af5 from idIndex" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.88135289Z" level=info msg="createCtr: removing container 5276730f09a16f1a890c2b6f3b4db9c77b69316f332db7e98d2d31e524dc0af5" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.881395763Z" level=info msg="createCtr: deleting container 5276730f09a16f1a890c2b6f3b4db9c77b69316f332db7e98d2d31e524dc0af5 from storage" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 20:54:40 functional-012915 crio[2919]: time="2025-10-02T20:54:40.88378689Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=34dd7490-9da7-421d-a4fe-55e98819b9e4 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:54:42.956228    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:42.956836    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:42.958492    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:42.958976    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:54:42.960577    5426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 20:54:42 up  2:37,  0 user,  load average: 0.64, 0.17, 0.36
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 20:54:32 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.883008    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 20:54:32 functional-012915 kubelet[1773]: E1002 20:54:32.897258    1773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.854495    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885785    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:33 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:33 functional-012915 kubelet[1773]:  > podSandboxID="81cb2ca5ac7acf1d0ec52dc7e36a2ebe21590776e2855b6e5546c94b7dad3e89"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885933    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:33 functional-012915 kubelet[1773]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:33 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:33 functional-012915 kubelet[1773]: E1002 20:54:33.885985    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: E1002 20:54:37.540254    1773 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: I1002 20:54:37.745682    1773 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 20:54:37 functional-012915 kubelet[1773]: E1002 20:54:37.746097    1773 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.320317    1773 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac76a13674072\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac76a13674072  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 20:44:22.84759461 +0000 UTC m=+0.324743301,LastTimestamp:2025-10-02 20:44:22.84910367 +0000 UTC m=+0.326252362,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.855056    1773 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.884110    1773 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 20:54:40 functional-012915 kubelet[1773]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:40 functional-012915 kubelet[1773]:  > podSandboxID="585b4230bcb56046e825d4238227e61b36dc2e8921ea6147c622b6bed61a91bf"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.884209    1773 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 20:54:40 functional-012915 kubelet[1773]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 20:54:40 functional-012915 kubelet[1773]:  > logger="UnhandledError"
	Oct 02 20:54:40 functional-012915 kubelet[1773]: E1002 20:54:40.884246    1773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 20:54:41 functional-012915 kubelet[1773]: E1002 20:54:41.334992    1773 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 20:54:42 functional-012915 kubelet[1773]: E1002 20:54:42.898023    1773 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	

-- /stdout --
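Note: the etcd CreateContainerError captured above ("cannot open sd-bus: No such file or directory") means the runtime's systemd cgroup driver could not open a systemd D-Bus socket inside the kicbase container. A minimal diagnostic sketch, assuming the usual systemd/D-Bus socket paths and CRI-O's stock /etc/crio layout (neither is confirmed by this report); the container name is taken from the logs above:

	# Does a systemd D-Bus socket exist inside the node container? The systemd
	# cgroup driver needs one to create scopes for new containers. (Assumed paths.)
	docker exec functional-012915 ls -l /run/systemd/private /run/dbus/system_bus_socket

	# Which cgroup manager is CRI-O configured with (systemd vs. cgroupfs)?
	docker exec functional-012915 grep -r cgroup_manager /etc/crio/
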
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (325.163882ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.17s)
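Note: helpers_test.go templates against single status fields ({{.APIServer}} here, {{.Host}} later in this report). A minimal sketch that reads the related fields in one call, assuming the standard minikube status template fields; exit status 2 ("may be ok") is expected while the apiserver is down:

	# Print host, kubelet, and apiserver state together for the profile above.
	out/minikube-linux-amd64 status -p functional-012915 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'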

TestFunctional/serial/ExtraConfig (733.93s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (12m12.071413379s)

-- stdout --
	* [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000945383s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-amd64 start -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:776: restart took 12m12.073644992s for "functional-012915" cluster.
I1002 21:06:55.873545   84100 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
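Note: kubeadm's wait-control-plane phase above polled three health endpoints, and all three refused connections. A minimal triage sketch, reproducing those probes and kubeadm's own crictl hint from inside the node container; it assumes curl is present in the kicbase image, while the URLs, socket path, and container name are taken verbatim from the logs above (CONTAINERID is kubeadm's placeholder, left unfilled):

	# The three endpoints kubeadm polled; -k because the apiserver cert is self-signed.
	docker exec functional-012915 curl -sk https://192.168.49.2:8441/livez
	docker exec functional-012915 curl -sk https://127.0.0.1:10257/healthz
	docker exec functional-012915 curl -sk https://127.0.0.1:10259/livez

	# kubeadm's suggested triage: list kube-* containers, then read a failing one's logs.
	docker exec functional-012915 sh -c 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	docker exec functional-012915 crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
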
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
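Note: a minimal sketch for pulling the one routinely needed value out of the inspect blob above, the host port Docker mapped to the apiserver's 8441/tcp, using the same Go-template form the provisioning log below applies to 22/tcp:

	# Resolves to 32781 for the container state captured above.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-012915
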
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (295.485656ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	│ start   │ -p functional-012915 --alsologtostderr -v=8                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │                     │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.1                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.3                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:latest                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add minikube-local-cache-test:functional-012915                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache delete minikube-local-cache-test:functional-012915                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl images                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cache   │ functional-012915 cache reload                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ kubectl │ functional-012915 kubectl -- --context functional-012915 get pods                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ start   │ -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:54:43
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:54:43.844587  109844 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:43.844861  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.844865  109844 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:43.844868  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.845038  109844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:54:43.845491  109844 out.go:368] Setting JSON to false
	I1002 20:54:43.846405  109844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9425,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:54:43.846500  109844 start.go:140] virtualization: kvm guest
	I1002 20:54:43.848999  109844 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:54:43.850877  109844 notify.go:220] Checking for updates...
	I1002 20:54:43.850921  109844 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:54:43.852793  109844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:43.854834  109844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:54:43.856692  109844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:54:43.858365  109844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:54:43.860403  109844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:43.863103  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:43.863204  109844 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:54:43.889469  109844 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:54:43.889551  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:43.945234  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.934776618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:43.945360  109844 docker.go:318] overlay module found
	I1002 20:54:43.947426  109844 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:43.949164  109844 start.go:304] selected driver: docker
	I1002 20:54:43.949174  109844 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:43.949277  109844 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:43.949355  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:44.006056  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.996347889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:44.006730  109844 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:54:44.006766  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:44.006828  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:44.006872  109844 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:44.008980  109844 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:54:44.010355  109844 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:54:44.011760  109844 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:54:44.012903  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:44.012938  109844 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:54:44.012951  109844 cache.go:58] Caching tarball of preloaded images
	I1002 20:54:44.012993  109844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:54:44.013033  109844 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:54:44.013038  109844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:54:44.013135  109844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:54:44.033578  109844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:54:44.033589  109844 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:54:44.033606  109844 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:54:44.033634  109844 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:54:44.033690  109844 start.go:364] duration metric: took 42.12µs to acquireMachinesLock for "functional-012915"
	I1002 20:54:44.033704  109844 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:54:44.033708  109844 fix.go:54] fixHost starting: 
	I1002 20:54:44.033949  109844 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:54:44.051193  109844 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:54:44.051212  109844 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:54:44.053363  109844 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:54:44.053388  109844 machine.go:93] provisionDockerMachine start ...
	I1002 20:54:44.053449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.071022  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.071263  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.071270  109844 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:54:44.215777  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.215796  109844 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:54:44.215846  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.233786  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.234003  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.234012  109844 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:54:44.386648  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.386732  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.405002  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.405287  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.405300  109844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:54:44.550595  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
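	(Note: the heredoc above follows the Debian convention of mapping the hostname to 127.0.1.1, rewriting an existing 127.0.1.1 entry in place and appending one otherwise. A hypothetical post-check, not part of the test run:
		grep -E '^127\.0\.1\.1\s' /etc/hosts   # expect: 127.0.1.1 functional-012915
	)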
	I1002 20:54:44.550613  109844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:54:44.550630  109844 ubuntu.go:190] setting up certificates
	I1002 20:54:44.550637  109844 provision.go:84] configureAuth start
	I1002 20:54:44.550684  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:44.568931  109844 provision.go:143] copyHostCerts
	I1002 20:54:44.568985  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:54:44.569001  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:54:44.569078  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:54:44.569204  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:54:44.569210  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:54:44.569250  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:54:44.569359  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:54:44.569365  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:54:44.569398  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:54:44.569559  109844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:54:44.708488  109844 provision.go:177] copyRemoteCerts
	I1002 20:54:44.708542  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:54:44.708581  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.726299  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:44.828230  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:54:44.845801  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:54:44.864647  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:54:44.886083  109844 provision.go:87] duration metric: took 335.431145ms to configureAuth
	I1002 20:54:44.886105  109844 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:54:44.886322  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:44.886449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.904652  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.904873  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.904882  109844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:54:45.179966  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:54:45.179982  109844 machine.go:96] duration metric: took 1.12658745s to provisionDockerMachine
	I1002 20:54:45.179993  109844 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:54:45.180006  109844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:54:45.180072  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:54:45.180106  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.198206  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.300487  109844 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:54:45.304200  109844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:54:45.304220  109844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:54:45.304236  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:54:45.304298  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:54:45.304376  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:54:45.304448  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:54:45.304489  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:54:45.312033  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:45.329488  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:54:45.347685  109844 start.go:296] duration metric: took 167.67425ms for postStartSetup
	I1002 20:54:45.347776  109844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:54:45.347829  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.365819  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.465348  109844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:54:45.470042  109844 fix.go:56] duration metric: took 1.436324828s for fixHost
	I1002 20:54:45.470060  109844 start.go:83] releasing machines lock for "functional-012915", held for 1.436363927s
	I1002 20:54:45.470140  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:45.487689  109844 ssh_runner.go:195] Run: cat /version.json
	I1002 20:54:45.487729  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.487802  109844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:54:45.487851  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.505570  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.507416  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.673212  109844 ssh_runner.go:195] Run: systemctl --version
	I1002 20:54:45.680090  109844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:54:45.716457  109844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:54:45.721126  109844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:54:45.721199  109844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:54:45.729223  109844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:54:45.729241  109844 start.go:495] detecting cgroup driver to use...
	I1002 20:54:45.729276  109844 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:54:45.729332  109844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:54:45.744221  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:54:45.757221  109844 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:54:45.757262  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:54:45.772166  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:54:45.785276  109844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:54:45.871303  109844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:54:45.959396  109844 docker.go:234] disabling docker service ...
	I1002 20:54:45.959460  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:54:45.974048  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:54:45.986376  109844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:54:46.071815  109844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:54:46.159773  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:54:46.172020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:54:46.186483  109844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:54:46.186540  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.195504  109844 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:54:46.195591  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.205033  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.213732  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.222589  109844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:54:46.230603  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.239758  109844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.248194  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.256956  109844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:54:46.264263  109844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:54:46.271577  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.354483  109844 ssh_runner.go:195] Run: sudo systemctl restart crio
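	(Note: the sed calls above converge /etc/crio/crio.conf.d/02-crio.conf on four settings: the pause image, cgroup_manager = "systemd", conmon_cgroup = "pod", and the net.ipv4.ip_unprivileged_port_start=0 default sysctl. A quick verification sketch after the restart, not part of the test:
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
	)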
	I1002 20:54:46.464818  109844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:54:46.464871  109844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:54:46.468860  109844 start.go:563] Will wait 60s for crictl version
	I1002 20:54:46.468905  109844 ssh_runner.go:195] Run: which crictl
	I1002 20:54:46.472439  109844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:54:46.496177  109844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:54:46.496237  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.524348  109844 ssh_runner.go:195] Run: crio --version
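	(Note: both version probes go through the CRI socket written to /etc/crictl.yaml earlier. The explicit form, should that config file be absent, would be:
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)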
	I1002 20:54:46.554038  109844 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:54:46.555482  109844 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:54:46.572825  109844 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:54:46.579140  109844 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:54:46.580455  109844 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:54:46.580599  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:46.580680  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.615204  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.615216  109844 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:54:46.615259  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.641403  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.641428  109844 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:54:46.641435  109844 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:54:46.641523  109844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:54:46.641593  109844 ssh_runner.go:195] Run: crio config
	I1002 20:54:46.685535  109844 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:54:46.685558  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:46.685570  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:46.685580  109844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:54:46.685603  109844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:54:46.685708  109844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
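	(Note: the generated kubeadm.yaml above stacks InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. A hedged way to sanity-check such a file without touching the node, using the same binary path and config path as the log:
		sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
		  kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml
	)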
	I1002 20:54:46.685786  109844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:54:46.694168  109844 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:54:46.694220  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:54:46.701920  109844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:54:46.714502  109844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:54:46.726979  109844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:54:46.739184  109844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:54:46.742937  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.828267  109844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:54:46.841290  109844 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:54:46.841302  109844 certs.go:195] generating shared ca certs ...
	I1002 20:54:46.841315  109844 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:54:46.841439  109844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:54:46.841480  109844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:54:46.841486  109844 certs.go:257] generating profile certs ...
	I1002 20:54:46.841556  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:54:46.841595  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:54:46.841625  109844 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:54:46.841728  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:54:46.841789  109844 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:54:46.841795  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:54:46.841816  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:54:46.841847  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:54:46.841870  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:54:46.841921  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:46.842546  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:54:46.860833  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:54:46.878996  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:54:46.897504  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:54:46.914816  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:54:46.931903  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:54:46.948901  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:54:46.965859  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:54:46.982982  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:54:47.000600  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:54:47.018108  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:54:47.035448  109844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:54:47.047886  109844 ssh_runner.go:195] Run: openssl version
	I1002 20:54:47.053789  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:54:47.062187  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066098  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066148  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.100024  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:54:47.108632  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:54:47.118249  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122176  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122226  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.156807  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:54:47.165260  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:54:47.173954  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177825  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177879  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.212057  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
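	(Note: the hash/symlink pattern above exists because OpenSSL resolves CAs in /etc/ssl/certs by subject-hash filenames of the form <hash>.0; b5213941 is the hash printed for minikubeCA.pem. The link the log creates can be reproduced by hand:
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
	)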
	I1002 20:54:47.220716  109844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:54:47.224961  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:54:47.259305  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:54:47.293091  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:54:47.327486  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:54:47.361854  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:54:47.395871  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
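	(Note: openssl's -checkend 86400 exits 0 only if the certificate remains valid for at least the next 86400 seconds, i.e. 24 hours. A minimal loop over the same certificates, paths taken from the log:
		for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
		         etcd/healthcheck-client etcd/peer front-proxy-client; do
		  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
		    || echo "$c expires within 24h"
		done
	)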
	I1002 20:54:47.429860  109844 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:47.429950  109844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:54:47.429996  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.458514  109844 cri.go:89] found id: ""
	I1002 20:54:47.458565  109844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:54:47.466572  109844 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:54:47.466595  109844 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:54:47.466642  109844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:54:47.473967  109844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.474578  109844 kubeconfig.go:125] found "functional-012915" server: "https://192.168.49.2:8441"
	I1002 20:54:47.476054  109844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:54:47.483705  109844 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:40:16.332502550 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:54:46.736875917 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1002 20:54:47.483713  109844 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:54:47.483724  109844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:54:47.483782  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.509815  109844 cri.go:89] found id: ""
	I1002 20:54:47.509892  109844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:54:47.553124  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:54:47.561262  109844 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:54:47.561322  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:54:47.569534  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:54:47.577441  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.577491  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:54:47.585032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.592533  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.592581  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.600040  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:54:47.607328  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.607365  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:54:47.614787  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:54:47.622401  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:47.663022  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.396196  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.576311  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.625411  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
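	(Note: the restart path reuses kubeadm's phase machinery rather than a full `kubeadm init`. The five invocations above are equivalent to one loop, with the same binary path and config file as in the log:
		CONF=/var/tmp/minikube/kubeadm.yaml
		BIN=/var/lib/minikube/binaries/v1.34.1
		for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
		  # $phase is intentionally unquoted so "certs all" splits into two arguments
		  sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CONF"
		done
	)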
	I1002 20:54:48.679287  109844 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:54:48.679369  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.179574  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.180317  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.680215  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.179826  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.679618  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.180390  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.679884  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.180480  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.180264  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.679704  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.179880  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.679789  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.179784  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.679611  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.179499  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.680068  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.179593  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.680342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.180363  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.679719  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.180464  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.680219  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.179572  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.679989  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.179867  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.680465  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.179787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.680167  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.179791  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.179712  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.680091  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.179473  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.680424  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.179668  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.680232  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.180357  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.679960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.180406  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.679893  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.180470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.680102  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.180344  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.679766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.180348  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.679643  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.180121  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.679815  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.179492  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.679526  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.180454  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.679641  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.180481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.679596  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.179991  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.680447  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.179814  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.679604  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.180037  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.680355  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.180349  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.679595  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.179952  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.680267  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.179901  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.680376  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.180156  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.679931  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.180000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.680128  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.179481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.680099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.180243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.680414  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.180290  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.680286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.179866  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.680103  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.180483  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.680117  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.179477  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.679634  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.180114  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.680389  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.179833  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.679848  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.180002  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.679520  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.180220  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.679624  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.179932  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.180365  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.679590  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.179548  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.680243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.179674  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.680191  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.179865  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.680176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.179534  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.679913  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.180457  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.679626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.179639  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.679943  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.179573  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.680221  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.180342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.679876  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.180254  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.679532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.180286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.679433  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.179977  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.679540  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:48.180382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
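	(Note: the half-second cadence of the pgrep calls above is minikube's apiserver wait loop; here it exhausts its window without ever finding the process, so log gathering begins below. A rough shell equivalent, with the timeout inferred from the surrounding timestamps:
		timeout 60 bash -c \
		  'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done' \
		  || echo "kube-apiserver never appeared"
	)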
	I1002 20:55:48.679912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:48.679971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:48.706989  109844 cri.go:89] found id: ""
	I1002 20:55:48.707014  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.707020  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:48.707025  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:48.707071  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:48.733283  109844 cri.go:89] found id: ""
	I1002 20:55:48.733299  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.733306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:48.733311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:48.733361  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:48.761228  109844 cri.go:89] found id: ""
	I1002 20:55:48.761245  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.761250  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:48.761256  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:48.761313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:48.788501  109844 cri.go:89] found id: ""
	I1002 20:55:48.788516  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.788522  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:48.788527  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:48.788579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:48.814616  109844 cri.go:89] found id: ""
	I1002 20:55:48.814636  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.814646  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:48.814651  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:48.814703  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:48.841518  109844 cri.go:89] found id: ""
	I1002 20:55:48.841538  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.841548  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:48.841555  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:48.841624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:48.869254  109844 cri.go:89] found id: ""
	I1002 20:55:48.869278  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.869288  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:48.869311  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:48.869335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:48.883919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:48.883937  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:48.941687  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:48.941698  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:48.941710  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:49.007787  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:49.007810  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:49.038133  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:49.038157  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
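
The cycle above repeats, with only timestamps and helper PIDs changing, for the rest of this start attempt: minikube probes for a kube-apiserver process, lists each expected control-plane container through crictl, finds none, gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, and retries roughly every three seconds. A minimal shell sketch of that probe sequence, reconstructed from the commands shown in the log (an approximation for illustration, not minikube's own implementation):

	#!/bin/bash
	# Rough sketch of the wait loop visible in this log: poll for a running
	# apiserver, enumerate the expected control-plane containers, gather logs.
	components=(kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet)
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	    for c in "${components[@]}"; do
	        # Empty output here matches the 'found id: ""' lines above.
	        sudo crictl ps -a --quiet --name="$c"
	    done
	    sudo journalctl -u kubelet -n 400 >/dev/null   # kubelet logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 >/dev/null
	    sudo journalctl -u crio -n 400 >/dev/null      # CRI-O logs
	    sleep 3
	done
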
	I1002 20:55:51.609461  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:51.620229  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:51.620296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:51.647003  109844 cri.go:89] found id: ""
	I1002 20:55:51.647022  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.647028  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:51.647033  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:51.647087  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:51.673376  109844 cri.go:89] found id: ""
	I1002 20:55:51.673394  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.673402  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:51.673408  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:51.673467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:51.700685  109844 cri.go:89] found id: ""
	I1002 20:55:51.700701  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.700719  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:51.700724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:51.700792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:51.726660  109844 cri.go:89] found id: ""
	I1002 20:55:51.726677  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.726684  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:51.726689  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:51.726762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:51.753630  109844 cri.go:89] found id: ""
	I1002 20:55:51.753646  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.753652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:51.753657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:51.753750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:51.779127  109844 cri.go:89] found id: ""
	I1002 20:55:51.779146  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.779155  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:51.779161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:51.779235  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:51.805960  109844 cri.go:89] found id: ""
	I1002 20:55:51.805979  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.805988  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:51.805997  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:51.806006  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:51.835916  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:51.835939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:51.905127  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:51.905159  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:51.920189  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:51.920209  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:51.976010  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:51.976023  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:51.976035  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.539314  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:54.550248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:54.550316  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:54.577239  109844 cri.go:89] found id: ""
	I1002 20:55:54.577254  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.577261  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:54.577265  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:54.577311  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:54.603907  109844 cri.go:89] found id: ""
	I1002 20:55:54.603927  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.603935  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:54.603941  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:54.603991  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:54.630527  109844 cri.go:89] found id: ""
	I1002 20:55:54.630543  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.630549  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:54.630562  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:54.630624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:54.658661  109844 cri.go:89] found id: ""
	I1002 20:55:54.658680  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.658688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:54.658693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:54.658774  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:54.684747  109844 cri.go:89] found id: ""
	I1002 20:55:54.684769  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.684807  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:54.684814  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:54.684890  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:54.711715  109844 cri.go:89] found id: ""
	I1002 20:55:54.711732  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.711777  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:54.711785  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:54.711842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:54.738961  109844 cri.go:89] found id: ""
	I1002 20:55:54.738979  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.738987  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:54.738996  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:54.739009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:54.806223  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:54.806250  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:54.820749  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:54.820771  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:54.877826  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:54.870974    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.871493    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873132    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873593    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.875041    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:54.870974    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.871493    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873132    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873593    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.875041    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:54.877845  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:54.877872  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.943126  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:54.943152  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:57.473420  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:57.484300  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:57.484350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:57.510256  109844 cri.go:89] found id: ""
	I1002 20:55:57.510274  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.510281  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:57.510285  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:57.510350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:57.536726  109844 cri.go:89] found id: ""
	I1002 20:55:57.536756  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.536766  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:57.536773  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:57.536824  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:57.562388  109844 cri.go:89] found id: ""
	I1002 20:55:57.562407  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.562416  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:57.562421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:57.562467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:57.589542  109844 cri.go:89] found id: ""
	I1002 20:55:57.589569  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.589577  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:57.589582  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:57.589647  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:57.616763  109844 cri.go:89] found id: ""
	I1002 20:55:57.616781  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.616790  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:57.616796  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:57.616842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:57.642618  109844 cri.go:89] found id: ""
	I1002 20:55:57.642637  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.642646  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:57.642652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:57.642700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:57.668671  109844 cri.go:89] found id: ""
	I1002 20:55:57.668686  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.668693  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:57.668700  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:57.668714  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:57.733001  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:57.733023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:57.747314  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:57.747338  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:57.803286  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:57.796365    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.796951    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.798536    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.799065    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.800640    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:57.796365    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.796951    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.798536    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.799065    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.800640    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:57.803303  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:57.803316  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:57.869484  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:57.869515  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:00.399551  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:00.410170  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:00.410218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:00.436280  109844 cri.go:89] found id: ""
	I1002 20:56:00.436299  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.436306  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:00.436313  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:00.436368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:00.463444  109844 cri.go:89] found id: ""
	I1002 20:56:00.463461  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.463467  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:00.463479  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:00.463542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:00.489898  109844 cri.go:89] found id: ""
	I1002 20:56:00.489912  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.489919  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:00.489923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:00.489970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:00.516907  109844 cri.go:89] found id: ""
	I1002 20:56:00.516925  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.516932  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:00.516937  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:00.516987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:00.543495  109844 cri.go:89] found id: ""
	I1002 20:56:00.543512  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.543519  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:00.543524  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:00.543575  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:00.569648  109844 cri.go:89] found id: ""
	I1002 20:56:00.569664  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.569670  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:00.569675  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:00.569722  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:00.596695  109844 cri.go:89] found id: ""
	I1002 20:56:00.596712  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.596719  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:00.596726  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:00.596756  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:00.664900  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:00.664923  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:00.679401  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:00.679420  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:00.736278  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:00.729378    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.729909    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731467    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731953    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.733441    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:00.729378    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.729909    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731467    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731953    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.733441    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:00.736292  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:00.736302  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:00.801067  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:00.801089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:03.333225  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:03.344042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:03.344094  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:03.370652  109844 cri.go:89] found id: ""
	I1002 20:56:03.370668  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.370675  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:03.370680  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:03.370749  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:03.398592  109844 cri.go:89] found id: ""
	I1002 20:56:03.398609  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.398616  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:03.398621  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:03.398675  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:03.425268  109844 cri.go:89] found id: ""
	I1002 20:56:03.425284  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.425292  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:03.425297  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:03.425348  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:03.451631  109844 cri.go:89] found id: ""
	I1002 20:56:03.451645  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.451651  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:03.451655  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:03.451713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:03.476703  109844 cri.go:89] found id: ""
	I1002 20:56:03.476718  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.476728  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:03.476748  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:03.476804  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:03.502825  109844 cri.go:89] found id: ""
	I1002 20:56:03.502840  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.502847  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:03.502852  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:03.502897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:03.530314  109844 cri.go:89] found id: ""
	I1002 20:56:03.530330  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.530337  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:03.530345  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:03.530358  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:03.596281  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:03.596307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:03.611117  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:03.611135  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:03.669231  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:03.661298    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.661803    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.663484    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.664056    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.665688    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:03.661298    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.661803    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.663484    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.664056    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.665688    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:03.669243  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:03.669254  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:03.735723  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:03.735761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:06.266853  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:06.278118  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:06.278167  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:06.304229  109844 cri.go:89] found id: ""
	I1002 20:56:06.304246  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.304252  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:06.304258  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:06.304314  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:06.331492  109844 cri.go:89] found id: ""
	I1002 20:56:06.331510  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.331517  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:06.331522  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:06.331574  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:06.357300  109844 cri.go:89] found id: ""
	I1002 20:56:06.357319  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.357328  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:06.357333  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:06.357381  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:06.385072  109844 cri.go:89] found id: ""
	I1002 20:56:06.385092  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.385101  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:06.385107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:06.385170  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:06.412479  109844 cri.go:89] found id: ""
	I1002 20:56:06.412499  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.412509  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:06.412516  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:06.412571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:06.439019  109844 cri.go:89] found id: ""
	I1002 20:56:06.439035  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.439042  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:06.439049  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:06.439105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:06.466228  109844 cri.go:89] found id: ""
	I1002 20:56:06.466244  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.466250  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:06.466257  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:06.466268  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:06.530972  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:06.530997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:06.546016  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:06.546039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:06.604192  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:06.597141    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.597599    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.599321    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.600026    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.601244    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:06.597141    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.597599    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.599321    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.600026    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.601244    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:06.604215  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:06.604226  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:06.668313  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:06.668341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:09.199470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:09.210902  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:09.210947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:09.237464  109844 cri.go:89] found id: ""
	I1002 20:56:09.237481  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.237488  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:09.237503  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:09.237549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:09.264849  109844 cri.go:89] found id: ""
	I1002 20:56:09.264868  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.264876  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:09.264884  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:09.264944  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:09.291066  109844 cri.go:89] found id: ""
	I1002 20:56:09.291083  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.291088  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:09.291094  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:09.291141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:09.316972  109844 cri.go:89] found id: ""
	I1002 20:56:09.316991  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.317001  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:09.317008  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:09.317066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:09.342462  109844 cri.go:89] found id: ""
	I1002 20:56:09.342479  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.342488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:09.342494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:09.342560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:09.369344  109844 cri.go:89] found id: ""
	I1002 20:56:09.369361  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.369370  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:09.369377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:09.369431  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:09.396279  109844 cri.go:89] found id: ""
	I1002 20:56:09.396295  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.396301  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:09.396309  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:09.396325  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:09.462471  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:09.462495  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:09.477360  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:09.477379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:09.533977  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:09.526956    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.527598    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529217    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529656    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.531136    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:09.526956    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.527598    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529217    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529656    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.531136    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:09.533991  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:09.534001  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:09.597829  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:09.597856  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:12.129375  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:12.140711  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:12.140778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:12.167268  109844 cri.go:89] found id: ""
	I1002 20:56:12.167287  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.167295  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:12.167301  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:12.167351  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:12.193605  109844 cri.go:89] found id: ""
	I1002 20:56:12.193620  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.193625  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:12.193630  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:12.193674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:12.220258  109844 cri.go:89] found id: ""
	I1002 20:56:12.220272  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.220279  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:12.220284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:12.220342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:12.246824  109844 cri.go:89] found id: ""
	I1002 20:56:12.246839  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.246845  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:12.246849  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:12.246897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:12.273611  109844 cri.go:89] found id: ""
	I1002 20:56:12.273631  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.273639  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:12.273647  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:12.273708  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:12.300838  109844 cri.go:89] found id: ""
	I1002 20:56:12.300856  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.300862  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:12.300868  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:12.300916  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:12.328414  109844 cri.go:89] found id: ""
	I1002 20:56:12.328429  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.328435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:12.328442  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:12.328453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:12.397603  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:12.397628  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:12.412076  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:12.412093  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:12.469369  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:12.462192    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.462709    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464313    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464791    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.466331    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:12.469384  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:12.469399  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:12.530104  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:12.530130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
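The cycle above is minikube's apiserver wait loop: it looks for a kube-apiserver process, asks the CRI for each control-plane container, and, finding none, collects kubelet, dmesg, "describe nodes", CRI-O, and container-status output before retrying a few seconds later. A minimal sketch of the same probe, run on the node itself (quoting added for safety; the commands are otherwise the ones logged above):

    # Newest process whose full command line matches the apiserver pattern,
    # falling back to the CRI's view of all (including exited) containers.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || sudo crictl ps -a --quiet --name=kube-apiserver \
      || echo 'no kube-apiserver process or container found'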
	I1002 20:56:15.060450  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:15.071089  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:15.071138  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:15.097730  109844 cri.go:89] found id: ""
	I1002 20:56:15.097766  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.097774  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:15.097783  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:15.097832  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:15.123349  109844 cri.go:89] found id: ""
	I1002 20:56:15.123366  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.123376  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:15.123382  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:15.123445  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:15.149644  109844 cri.go:89] found id: ""
	I1002 20:56:15.149659  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.149665  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:15.149670  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:15.149717  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:15.175442  109844 cri.go:89] found id: ""
	I1002 20:56:15.175464  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.175473  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:15.175480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:15.175534  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:15.200859  109844 cri.go:89] found id: ""
	I1002 20:56:15.200875  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.200881  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:15.200886  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:15.200931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:15.226770  109844 cri.go:89] found id: ""
	I1002 20:56:15.226786  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.226792  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:15.226797  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:15.226857  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:15.252444  109844 cri.go:89] found id: ""
	I1002 20:56:15.252462  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.252472  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:15.252480  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:15.252493  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:15.281148  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:15.281166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:15.350382  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:15.350406  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:15.365144  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:15.365163  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:15.421764  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:15.414607    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.415162    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.416781    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.417290    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.418840    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:15.421789  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:15.421802  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
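Every "describe nodes" attempt fails before any API request is made: nothing is listening on the apiserver port, so the client's discovery cache refresh gets connection refused on [::1]:8441. One way to confirm this from inside the node (a sketch; port 8441 is taken from this log, and /healthz is the classic liveness path, with /livez as the newer equivalent):

    # Connection refused while the apiserver is down; a healthy apiserver
    # answers with "ok".
    curl -sk https://localhost:8441/healthz || echo 'apiserver not reachable'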
	I1002 20:56:17.982382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:17.992951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:17.992999  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:18.018834  109844 cri.go:89] found id: ""
	I1002 20:56:18.018853  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.018862  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:18.018869  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:18.018923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:18.045169  109844 cri.go:89] found id: ""
	I1002 20:56:18.045186  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.045192  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:18.045196  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:18.045245  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:18.071187  109844 cri.go:89] found id: ""
	I1002 20:56:18.071202  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.071209  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:18.071213  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:18.071263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:18.099002  109844 cri.go:89] found id: ""
	I1002 20:56:18.099021  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.099031  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:18.099037  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:18.099086  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:18.124458  109844 cri.go:89] found id: ""
	I1002 20:56:18.124474  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.124481  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:18.124486  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:18.124532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:18.151052  109844 cri.go:89] found id: ""
	I1002 20:56:18.151070  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.151078  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:18.151086  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:18.151147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:18.177070  109844 cri.go:89] found id: ""
	I1002 20:56:18.177088  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.177097  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:18.177106  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:18.177120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:18.245531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:18.245551  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:18.259536  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:18.259555  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:18.315828  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:18.309110    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.309608    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311154    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311572    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.313080    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:18.315838  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:18.315849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:18.378894  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:18.378917  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:20.910289  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:20.921508  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:20.921565  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:20.949001  109844 cri.go:89] found id: ""
	I1002 20:56:20.949015  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.949022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:20.949027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:20.949073  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:20.975236  109844 cri.go:89] found id: ""
	I1002 20:56:20.975253  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.975259  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:20.975264  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:20.975310  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:21.002161  109844 cri.go:89] found id: ""
	I1002 20:56:21.002176  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.002183  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:21.002188  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:21.002236  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:21.029183  109844 cri.go:89] found id: ""
	I1002 20:56:21.029203  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.029211  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:21.029218  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:21.029291  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:21.056171  109844 cri.go:89] found id: ""
	I1002 20:56:21.056187  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.056193  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:21.056198  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:21.056248  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:21.083782  109844 cri.go:89] found id: ""
	I1002 20:56:21.083801  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.083810  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:21.083817  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:21.083873  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:21.110480  109844 cri.go:89] found id: ""
	I1002 20:56:21.110496  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.110503  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:21.110512  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:21.110526  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:21.178200  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:21.178224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:21.192348  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:21.192367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:21.248832  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:21.241470    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.242149    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.243832    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.244309    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.245873    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:21.248843  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:21.248866  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:21.313859  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:21.313939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:23.844485  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:23.855704  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:23.855785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:23.881987  109844 cri.go:89] found id: ""
	I1002 20:56:23.882003  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.882009  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:23.882014  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:23.882058  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:23.908092  109844 cri.go:89] found id: ""
	I1002 20:56:23.908109  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.908115  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:23.908121  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:23.908175  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:23.933489  109844 cri.go:89] found id: ""
	I1002 20:56:23.933503  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.933509  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:23.933514  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:23.933560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:23.958962  109844 cri.go:89] found id: ""
	I1002 20:56:23.958978  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.958985  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:23.958991  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:23.959039  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:23.985206  109844 cri.go:89] found id: ""
	I1002 20:56:23.985222  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.985231  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:23.985237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:23.985298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:24.011436  109844 cri.go:89] found id: ""
	I1002 20:56:24.011453  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.011460  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:24.011465  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:24.011512  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:24.036401  109844 cri.go:89] found id: ""
	I1002 20:56:24.036417  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.036423  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:24.036431  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:24.036447  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:24.050446  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:24.050465  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:24.105883  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:24.099062    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.099587    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101050    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101530    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.103091    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:24.105896  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:24.105906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:24.165660  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:24.165683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:24.194659  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:24.194677  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:26.765857  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:26.776723  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:26.776795  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:26.803878  109844 cri.go:89] found id: ""
	I1002 20:56:26.803894  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.803901  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:26.803906  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:26.803960  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:26.828926  109844 cri.go:89] found id: ""
	I1002 20:56:26.828944  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.828950  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:26.828955  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:26.829002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:26.854812  109844 cri.go:89] found id: ""
	I1002 20:56:26.854828  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.854834  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:26.854840  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:26.854887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:26.881665  109844 cri.go:89] found id: ""
	I1002 20:56:26.881682  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.881688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:26.881693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:26.881763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:26.909265  109844 cri.go:89] found id: ""
	I1002 20:56:26.909284  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.909294  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:26.909301  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:26.909355  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:26.935117  109844 cri.go:89] found id: ""
	I1002 20:56:26.935133  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.935139  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:26.935144  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:26.935200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:26.961377  109844 cri.go:89] found id: ""
	I1002 20:56:26.961392  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.961399  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:26.961406  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:26.961417  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:26.989187  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:26.989204  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:27.056354  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:27.056379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:27.070926  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:27.070944  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:27.127442  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:27.119650    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.120189    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.122490    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.123013    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.124580    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:27.127456  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:27.127473  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:29.687547  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:29.698733  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:29.698810  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:29.724706  109844 cri.go:89] found id: ""
	I1002 20:56:29.724721  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.724727  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:29.724732  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:29.724794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:29.752274  109844 cri.go:89] found id: ""
	I1002 20:56:29.752291  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.752297  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:29.752308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:29.752369  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:29.778792  109844 cri.go:89] found id: ""
	I1002 20:56:29.778807  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.778813  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:29.778818  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:29.778867  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:29.804447  109844 cri.go:89] found id: ""
	I1002 20:56:29.804468  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.804485  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:29.804490  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:29.804540  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:29.830280  109844 cri.go:89] found id: ""
	I1002 20:56:29.830301  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.830310  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:29.830316  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:29.830375  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:29.855193  109844 cri.go:89] found id: ""
	I1002 20:56:29.855209  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.855215  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:29.855220  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:29.855270  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:29.881092  109844 cri.go:89] found id: ""
	I1002 20:56:29.881107  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.881114  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:29.881122  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:29.881132  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:29.948531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:29.948565  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:29.962996  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:29.963015  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:30.019733  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:30.012437    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.013106    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.014710    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.015163    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.016849    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:30.019769  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:30.019784  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:30.080302  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:30.080332  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:32.612620  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:32.623619  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:32.623669  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:32.649868  109844 cri.go:89] found id: ""
	I1002 20:56:32.649884  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.649890  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:32.649895  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:32.649947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:32.676993  109844 cri.go:89] found id: ""
	I1002 20:56:32.677011  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.677020  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:32.677026  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:32.677084  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:32.703005  109844 cri.go:89] found id: ""
	I1002 20:56:32.703026  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.703036  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:32.703042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:32.703105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:32.728641  109844 cri.go:89] found id: ""
	I1002 20:56:32.728657  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.728663  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:32.728668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:32.728716  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:32.754904  109844 cri.go:89] found id: ""
	I1002 20:56:32.754922  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.754931  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:32.754938  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:32.754996  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:32.780607  109844 cri.go:89] found id: ""
	I1002 20:56:32.780623  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.780632  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:32.780638  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:32.780700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:32.805534  109844 cri.go:89] found id: ""
	I1002 20:56:32.805549  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.805555  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:32.805564  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:32.805575  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:32.871168  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:32.871190  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:32.885484  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:32.885503  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:32.942338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:32.935227    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.935814    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937470    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937975    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.939512    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:32.942348  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:32.942361  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:33.006822  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:33.006849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.539700  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:35.550793  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:35.550843  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:35.577123  109844 cri.go:89] found id: ""
	I1002 20:56:35.577141  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.577152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:35.577158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:35.577205  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:35.603414  109844 cri.go:89] found id: ""
	I1002 20:56:35.603429  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.603435  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:35.603440  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:35.603487  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:35.630119  109844 cri.go:89] found id: ""
	I1002 20:56:35.630139  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.630151  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:35.630161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:35.630216  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:35.656385  109844 cri.go:89] found id: ""
	I1002 20:56:35.656400  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.656406  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:35.656410  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:35.656461  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:35.683092  109844 cri.go:89] found id: ""
	I1002 20:56:35.683109  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.683117  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:35.683121  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:35.683168  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:35.709629  109844 cri.go:89] found id: ""
	I1002 20:56:35.709644  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.709651  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:35.709657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:35.709713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:35.737006  109844 cri.go:89] found id: ""
	I1002 20:56:35.737025  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.737035  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:35.737043  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:35.737054  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.767533  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:35.767556  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:35.833953  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:35.833980  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:35.848818  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:35.848839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:35.906998  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:35.899806    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.900358    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.901937    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.902434    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.903965    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:35.907011  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:35.907024  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
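Because crictl finds no control-plane containers at all, not even exited ones, the failure sits upstream of the apiserver: the kubelet, or CRI-O underneath it, never creates the static pods. The two journals this loop keeps tailing are the place to look; roughly, from the host (PROFILE is a placeholder for the profile under test):

    # Same units the log tails with "journalctl -u ... -n 400".
    minikube -p PROFILE ssh "sudo journalctl -u kubelet --no-pager -n 400"
    minikube -p PROFILE ssh "sudo journalctl -u crio --no-pager -n 400"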
	I1002 20:56:38.471319  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:38.481958  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:38.482010  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:38.507711  109844 cri.go:89] found id: ""
	I1002 20:56:38.507730  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.507751  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:38.507758  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:38.507820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:38.534015  109844 cri.go:89] found id: ""
	I1002 20:56:38.534033  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.534039  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:38.534045  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:38.534096  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:38.561341  109844 cri.go:89] found id: ""
	I1002 20:56:38.561358  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.561367  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:38.561373  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:38.561433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:38.587872  109844 cri.go:89] found id: ""
	I1002 20:56:38.587891  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.587901  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:38.587907  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:38.587973  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:38.612399  109844 cri.go:89] found id: ""
	I1002 20:56:38.612418  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.612427  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:38.612433  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:38.612480  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:38.639104  109844 cri.go:89] found id: ""
	I1002 20:56:38.639120  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.639127  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:38.639132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:38.639190  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:38.667322  109844 cri.go:89] found id: ""
	I1002 20:56:38.667339  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.667345  109844 logs.go:284] No container was found matching "kindnet"
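The sweep above queries the CRI once per expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) and finds no containers at all, so the control plane never started rather than merely losing one piece. A condensed sketch of the same check, using the crictl flags from the log (-a includes exited containers, --quiet prints bare IDs, --name filters by regex):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching $name"
    done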
	I1002 20:56:38.667352  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:38.667363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:38.682168  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:38.682187  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:38.740651  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:38.733357    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.733969    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.735590    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.736050    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.737649    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:38.740663  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:38.740674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:38.805774  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:38.805798  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:38.835944  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:38.835962  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
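Each retry cycle opens with the pgrep call seen on the next line. Spelled out, with the flag meanings from pgrep(1):

    # -f  match the pattern against the full command line
    # -x  require the whole command line to match the pattern exactly
    # -n  print only the newest matching PID
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

The same pattern repeats at the top of every cycle below.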
	I1002 20:56:41.406460  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:41.417553  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:41.417620  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:41.444684  109844 cri.go:89] found id: ""
	I1002 20:56:41.444698  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.444705  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:41.444710  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:41.444781  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:41.471352  109844 cri.go:89] found id: ""
	I1002 20:56:41.471370  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.471382  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:41.471390  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:41.471442  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:41.498686  109844 cri.go:89] found id: ""
	I1002 20:56:41.498702  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.498709  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:41.498714  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:41.498785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:41.524449  109844 cri.go:89] found id: ""
	I1002 20:56:41.524463  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.524469  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:41.524478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:41.524531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:41.551827  109844 cri.go:89] found id: ""
	I1002 20:56:41.551845  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.551857  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:41.551864  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:41.551913  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:41.577898  109844 cri.go:89] found id: ""
	I1002 20:56:41.577918  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.577927  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:41.577933  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:41.577989  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:41.604237  109844 cri.go:89] found id: ""
	I1002 20:56:41.604254  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.604261  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:41.604270  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:41.604290  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.675907  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:41.675931  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:41.690491  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:41.690509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:41.749157  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:41.742425    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.742947    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.744615    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.745122    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.746195    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:41.749169  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:41.749184  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:41.815715  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:41.815751  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
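The "container status" command above is runtime-agnostic on purpose: the backtick substitution resolves to the crictl path when crictl is installed (otherwise to the bare word crictl, whose failure then triggers the || branch), and docker ps -a is the last resort. A simplified near-equivalent, as a sketch:

    # Prefer crictl when present; otherwise fall back to the docker CLI.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi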
	I1002 20:56:44.347532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:44.358694  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:44.358755  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:44.385917  109844 cri.go:89] found id: ""
	I1002 20:56:44.385932  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.385941  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:44.385946  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:44.385992  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:44.412267  109844 cri.go:89] found id: ""
	I1002 20:56:44.412283  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.412289  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:44.412293  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:44.412344  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:44.439227  109844 cri.go:89] found id: ""
	I1002 20:56:44.439242  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.439249  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:44.439253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:44.439298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:44.465395  109844 cri.go:89] found id: ""
	I1002 20:56:44.465411  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.465418  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:44.465423  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:44.465473  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:44.491435  109844 cri.go:89] found id: ""
	I1002 20:56:44.491452  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.491457  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:44.491462  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:44.491508  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:44.517875  109844 cri.go:89] found id: ""
	I1002 20:56:44.517892  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.517899  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:44.517904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:44.517956  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:44.544412  109844 cri.go:89] found id: ""
	I1002 20:56:44.544428  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.544435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:44.544443  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:44.544454  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:44.558619  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:44.558637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:44.615090  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:44.608024    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.608566    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610178    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610634    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.612155    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:44.615103  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:44.615115  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:44.675486  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:44.675509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.704835  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:44.704853  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
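Note that the describe-nodes check runs the kubectl binary minikube provisioned into the node (pinned at v1.34.1) against the in-node kubeconfig, not whatever kubectl and config the host happens to have. The same check can be replayed by hand with the paths taken from the log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig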
	I1002 20:56:47.280286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:47.291478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:47.291529  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:47.318560  109844 cri.go:89] found id: ""
	I1002 20:56:47.318581  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.318586  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:47.318594  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:47.318648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:47.344455  109844 cri.go:89] found id: ""
	I1002 20:56:47.344471  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.344477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:47.344482  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:47.344527  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:47.370437  109844 cri.go:89] found id: ""
	I1002 20:56:47.370452  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.370458  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:47.370464  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:47.370532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:47.396657  109844 cri.go:89] found id: ""
	I1002 20:56:47.396672  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.396678  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:47.396682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:47.396751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:47.422143  109844 cri.go:89] found id: ""
	I1002 20:56:47.422166  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.422172  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:47.422178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:47.422230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:47.447815  109844 cri.go:89] found id: ""
	I1002 20:56:47.447835  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.447844  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:47.447851  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:47.447910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:47.473476  109844 cri.go:89] found id: ""
	I1002 20:56:47.473491  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.473498  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:47.473514  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:47.473528  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:47.487700  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:47.487722  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:47.544344  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:47.537160    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.537816    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539394    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539878    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.541420    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:47.544360  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:47.544370  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:47.605987  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:47.606010  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:47.634796  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:47.634815  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.205345  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:50.216795  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:50.216856  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:50.242490  109844 cri.go:89] found id: ""
	I1002 20:56:50.242507  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.242516  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:50.242523  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:50.242599  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:50.269384  109844 cri.go:89] found id: ""
	I1002 20:56:50.269399  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.269405  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:50.269410  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:50.269455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:50.294810  109844 cri.go:89] found id: ""
	I1002 20:56:50.294830  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.294839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:50.294847  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:50.294900  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:50.321301  109844 cri.go:89] found id: ""
	I1002 20:56:50.321330  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.321339  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:50.321345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:50.321396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:50.348435  109844 cri.go:89] found id: ""
	I1002 20:56:50.348454  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.348463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:50.348470  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:50.348521  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:50.375520  109844 cri.go:89] found id: ""
	I1002 20:56:50.375537  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.375544  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:50.375550  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:50.375612  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:50.401919  109844 cri.go:89] found id: ""
	I1002 20:56:50.401935  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.401941  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:50.401949  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:50.401960  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.474853  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:50.474878  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:50.489483  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:50.489502  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:50.546358  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:50.539620    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.540253    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.541729    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.542224    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.543673    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:50.546371  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:50.546387  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:50.612342  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:50.612365  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.143229  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:53.154347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:53.154399  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:53.179697  109844 cri.go:89] found id: ""
	I1002 20:56:53.179714  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.179722  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:53.179727  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:53.179796  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:53.206078  109844 cri.go:89] found id: ""
	I1002 20:56:53.206094  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.206102  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:53.206107  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:53.206161  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:53.232905  109844 cri.go:89] found id: ""
	I1002 20:56:53.232920  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.232929  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:53.232935  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:53.232990  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:53.258881  109844 cri.go:89] found id: ""
	I1002 20:56:53.258897  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.258903  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:53.258908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:53.259002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:53.286005  109844 cri.go:89] found id: ""
	I1002 20:56:53.286020  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.286026  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:53.286031  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:53.286077  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:53.311544  109844 cri.go:89] found id: ""
	I1002 20:56:53.311562  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.311572  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:53.311579  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:53.311642  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:53.338344  109844 cri.go:89] found id: ""
	I1002 20:56:53.338360  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.338366  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:53.338375  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:53.338391  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:53.394654  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:53.387661    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.388633    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.389809    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.390172    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.391803    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:53.394666  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:53.394676  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:53.457101  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:53.457125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.487445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:53.487464  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:53.560767  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:53.560788  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
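The dmesg step trims the kernel ring buffer down to anything at warning level or worse before capturing the tail. Per the util-linux dmesg(1) options, the flags decompose roughly as:

    # -P          do not pipe output into a pager
    # -H          human-readable timestamps
    # -L=never    disable color escape codes
    # --level     keep only the listed priorities
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400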
	I1002 20:56:56.077698  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:56.088607  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:56.088653  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:56.115831  109844 cri.go:89] found id: ""
	I1002 20:56:56.115851  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.115860  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:56.115873  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:56.115930  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:56.143933  109844 cri.go:89] found id: ""
	I1002 20:56:56.143951  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.143960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:56.143966  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:56.144013  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:56.170959  109844 cri.go:89] found id: ""
	I1002 20:56:56.170976  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.170983  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:56.170987  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:56.171041  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:56.198476  109844 cri.go:89] found id: ""
	I1002 20:56:56.198493  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.198502  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:56.198507  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:56.198553  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:56.225118  109844 cri.go:89] found id: ""
	I1002 20:56:56.225136  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.225144  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:56.225151  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:56.225203  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:56.250695  109844 cri.go:89] found id: ""
	I1002 20:56:56.250712  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.250719  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:56.250724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:56.250798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:56.277912  109844 cri.go:89] found id: ""
	I1002 20:56:56.277927  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.277933  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:56.277939  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:56.277949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:56.348703  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:56.348726  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:56.363669  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:56.363691  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:56.421487  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:56.414561    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.415193    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.416833    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.417344    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.418421    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:56.421501  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:56.421512  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:56.486234  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:56.486258  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.016061  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:59.027120  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:59.027174  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:59.055077  109844 cri.go:89] found id: ""
	I1002 20:56:59.055094  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.055100  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:59.055105  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:59.055154  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:59.080243  109844 cri.go:89] found id: ""
	I1002 20:56:59.080260  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.080267  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:59.080272  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:59.080321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:59.105555  109844 cri.go:89] found id: ""
	I1002 20:56:59.105573  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.105582  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:59.105588  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:59.105643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:59.131895  109844 cri.go:89] found id: ""
	I1002 20:56:59.131911  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.131918  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:59.131923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:59.131971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:59.158699  109844 cri.go:89] found id: ""
	I1002 20:56:59.158716  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.158724  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:59.158731  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:59.158813  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:59.184528  109844 cri.go:89] found id: ""
	I1002 20:56:59.184547  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.184553  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:59.184558  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:59.184621  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:59.210382  109844 cri.go:89] found id: ""
	I1002 20:56:59.210398  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.210406  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:59.210415  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:59.210435  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:59.274026  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:59.274049  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.303182  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:59.303199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:59.372421  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:59.372446  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:59.388344  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:59.388367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:59.449053  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:59.441943    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.442636    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.443715    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.444268    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.445829    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
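Stepping back, the timestamps show this whole stretch is one wait loop: probe for an apiserver, gather kubelet/dmesg/describe-nodes/CRI-O/container-status logs, sleep, and go again roughly every three seconds. Reduced to a sketch (the 3s cadence is read off the log; the 120s budget is illustrative, since minikube's actual deadline is not visible in this excerpt):

    # Poll until something answers on the apiserver port, or give up.
    deadline=$((SECONDS + 120))
    until curl -sk --max-time 2 https://localhost:8441/healthz >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver never came up on :8441" >&2
        exit 1
      fi
      sleep 3
    done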
	I1002 20:57:01.950787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:01.962421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:01.962505  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:01.990756  109844 cri.go:89] found id: ""
	I1002 20:57:01.990774  109844 logs.go:282] 0 containers: []
	W1002 20:57:01.990781  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:01.990786  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:01.990835  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:02.018452  109844 cri.go:89] found id: ""
	I1002 20:57:02.018471  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.018480  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:02.018485  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:02.018568  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:02.046456  109844 cri.go:89] found id: ""
	I1002 20:57:02.046474  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.046481  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:02.046485  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:02.046549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:02.074761  109844 cri.go:89] found id: ""
	I1002 20:57:02.074781  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.074794  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:02.074799  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:02.074859  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:02.102891  109844 cri.go:89] found id: ""
	I1002 20:57:02.102910  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.102919  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:02.102926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:02.102986  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:02.129478  109844 cri.go:89] found id: ""
	I1002 20:57:02.129496  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.129503  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:02.129509  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:02.129571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:02.157911  109844 cri.go:89] found id: ""
	I1002 20:57:02.157927  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.157934  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:02.157941  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:02.157954  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:02.216970  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:02.209199    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.209824    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211437    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211932    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.213815    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:02.216979  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:02.216990  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:02.280811  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:02.280839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:02.310062  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:02.310084  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:02.379511  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:02.379536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:04.894910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:04.906215  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:04.906297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:04.934307  109844 cri.go:89] found id: ""
	I1002 20:57:04.934323  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.934330  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:04.934335  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:04.934388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:04.961709  109844 cri.go:89] found id: ""
	I1002 20:57:04.961725  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.961731  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:04.961751  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:04.961803  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:04.988103  109844 cri.go:89] found id: ""
	I1002 20:57:04.988123  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.988134  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:04.988141  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:04.988204  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:05.015267  109844 cri.go:89] found id: ""
	I1002 20:57:05.015282  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.015293  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:05.015298  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:05.015347  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:05.042563  109844 cri.go:89] found id: ""
	I1002 20:57:05.042585  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.042592  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:05.042597  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:05.042648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:05.070337  109844 cri.go:89] found id: ""
	I1002 20:57:05.070356  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.070365  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:05.070372  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:05.070426  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:05.096592  109844 cri.go:89] found id: ""
	I1002 20:57:05.096607  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.096613  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:05.096622  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:05.096635  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:05.169506  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:05.169529  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:05.184432  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:05.184452  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:05.241625  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:05.234636    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.235167    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.236774    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.237205    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.238801    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:05.241643  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:05.241657  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:05.304652  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:05.304675  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:07.835766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:07.847178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:07.847237  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:07.873351  109844 cri.go:89] found id: ""
	I1002 20:57:07.873370  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.873380  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:07.873387  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:07.873457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:07.900684  109844 cri.go:89] found id: ""
	I1002 20:57:07.900700  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.900707  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:07.900713  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:07.900792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:07.928661  109844 cri.go:89] found id: ""
	I1002 20:57:07.928677  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.928686  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:07.928692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:07.928763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:07.954556  109844 cri.go:89] found id: ""
	I1002 20:57:07.954573  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.954583  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:07.954589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:07.954657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:07.982818  109844 cri.go:89] found id: ""
	I1002 20:57:07.982833  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.982839  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:07.982845  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:07.982903  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:08.010107  109844 cri.go:89] found id: ""
	I1002 20:57:08.010123  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.010129  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:08.010134  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:08.010183  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:08.037125  109844 cri.go:89] found id: ""
	I1002 20:57:08.037142  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.037150  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:08.037157  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:08.037166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:08.096417  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:08.096440  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:08.126218  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:08.126239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:08.194545  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:08.194571  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:08.210281  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:08.210304  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:08.266772  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:08.260009   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.260455   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262035   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262436   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.264034   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:10.768500  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:10.779701  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:10.779778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:10.806553  109844 cri.go:89] found id: ""
	I1002 20:57:10.806570  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.806578  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:10.806583  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:10.806628  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:10.831907  109844 cri.go:89] found id: ""
	I1002 20:57:10.831921  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.831938  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:10.831942  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:10.831987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:10.858755  109844 cri.go:89] found id: ""
	I1002 20:57:10.858773  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.858781  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:10.858786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:10.858844  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:10.886428  109844 cri.go:89] found id: ""
	I1002 20:57:10.886451  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.886460  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:10.886467  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:10.886528  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:10.912297  109844 cri.go:89] found id: ""
	I1002 20:57:10.912336  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.912344  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:10.912351  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:10.912405  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:10.939017  109844 cri.go:89] found id: ""
	I1002 20:57:10.939037  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.939043  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:10.939050  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:10.939112  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:10.964149  109844 cri.go:89] found id: ""
	I1002 20:57:10.964166  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.964173  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:10.964181  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:10.964192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:11.035194  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:11.035220  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:11.050083  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:11.050103  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:11.107489  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:11.100162   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.100777   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102350   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102866   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.104475   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:11.107508  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:11.107525  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:11.168024  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:11.168048  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:13.699241  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:13.709921  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:13.709982  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:13.735975  109844 cri.go:89] found id: ""
	I1002 20:57:13.735994  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.736004  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:13.736010  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:13.736059  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:13.762999  109844 cri.go:89] found id: ""
	I1002 20:57:13.763017  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.763024  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:13.763029  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:13.763082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:13.790647  109844 cri.go:89] found id: ""
	I1002 20:57:13.790667  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.790676  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:13.790682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:13.790753  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:13.816587  109844 cri.go:89] found id: ""
	I1002 20:57:13.816607  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.816617  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:13.816623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:13.816688  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:13.842814  109844 cri.go:89] found id: ""
	I1002 20:57:13.842829  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.842836  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:13.842841  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:13.842891  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:13.868241  109844 cri.go:89] found id: ""
	I1002 20:57:13.868260  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.868269  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:13.868275  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:13.868327  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:13.895111  109844 cri.go:89] found id: ""
	I1002 20:57:13.895128  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.895138  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:13.895147  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:13.895158  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:13.962125  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:13.962150  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:13.976779  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:13.976795  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:14.033771  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:14.027040   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.027554   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029207   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029659   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.031092   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:14.033782  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:14.033792  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:14.097410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:14.097434  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:16.629753  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:16.640873  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:16.640931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:16.668538  109844 cri.go:89] found id: ""
	I1002 20:57:16.668557  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.668568  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:16.668574  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:16.668633  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:16.697564  109844 cri.go:89] found id: ""
	I1002 20:57:16.697595  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.697605  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:16.697612  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:16.697666  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:16.725228  109844 cri.go:89] found id: ""
	I1002 20:57:16.725242  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.725248  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:16.725253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:16.725297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:16.750995  109844 cri.go:89] found id: ""
	I1002 20:57:16.751010  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.751017  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:16.751022  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:16.751066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:16.777779  109844 cri.go:89] found id: ""
	I1002 20:57:16.777796  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.777803  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:16.777809  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:16.777869  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:16.803504  109844 cri.go:89] found id: ""
	I1002 20:57:16.803521  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.803527  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:16.803532  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:16.803593  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:16.830272  109844 cri.go:89] found id: ""
	I1002 20:57:16.830287  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.830294  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:16.830302  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:16.830313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:16.902383  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:16.902407  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:16.917396  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:16.917415  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:16.974693  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:16.966376   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.966932   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.968658   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.969953   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.970548   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:16.974702  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:16.974713  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:17.035157  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:17.035179  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.566417  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:19.577676  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:19.577746  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:19.604005  109844 cri.go:89] found id: ""
	I1002 20:57:19.604021  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.604027  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:19.604032  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:19.604080  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:19.631397  109844 cri.go:89] found id: ""
	I1002 20:57:19.631415  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.631423  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:19.631433  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:19.631486  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:19.657474  109844 cri.go:89] found id: ""
	I1002 20:57:19.657491  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.657498  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:19.657502  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:19.657550  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:19.683165  109844 cri.go:89] found id: ""
	I1002 20:57:19.683183  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.683240  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:19.683248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:19.683303  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:19.709607  109844 cri.go:89] found id: ""
	I1002 20:57:19.709623  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.709629  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:19.709634  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:19.709681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:19.736310  109844 cri.go:89] found id: ""
	I1002 20:57:19.736326  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.736333  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:19.736338  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:19.736388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:19.763087  109844 cri.go:89] found id: ""
	I1002 20:57:19.763103  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.763109  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:19.763117  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:19.763130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:19.777545  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:19.777563  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:19.835265  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:19.828219   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.828825   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830398   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830870   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.832345   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:19.835276  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:19.835288  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:19.900559  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:19.900584  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.929602  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:19.929620  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.502944  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:22.514059  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:22.514108  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:22.540127  109844 cri.go:89] found id: ""
	I1002 20:57:22.540144  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.540152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:22.540158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:22.540229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:22.566906  109844 cri.go:89] found id: ""
	I1002 20:57:22.566920  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.566929  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:22.566936  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:22.566988  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:22.593141  109844 cri.go:89] found id: ""
	I1002 20:57:22.593160  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.593170  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:22.593178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:22.593258  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:22.617379  109844 cri.go:89] found id: ""
	I1002 20:57:22.617395  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.617403  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:22.617408  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:22.617482  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:22.642997  109844 cri.go:89] found id: ""
	I1002 20:57:22.643015  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.643023  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:22.643030  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:22.643088  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:22.669891  109844 cri.go:89] found id: ""
	I1002 20:57:22.669910  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.669918  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:22.669925  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:22.669979  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:22.698537  109844 cri.go:89] found id: ""
	I1002 20:57:22.698553  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.698559  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:22.698571  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:22.698582  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.764795  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:22.764818  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:22.779339  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:22.779360  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:22.835541  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:22.828422   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.828970   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.830522   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.831086   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.832606   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:22.835550  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:22.835561  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:22.893791  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:22.893816  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.423487  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:25.434946  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:25.435008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:25.461262  109844 cri.go:89] found id: ""
	I1002 20:57:25.461278  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.461286  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:25.461293  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:25.461373  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:25.487938  109844 cri.go:89] found id: ""
	I1002 20:57:25.487954  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.487960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:25.487965  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:25.488008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:25.513819  109844 cri.go:89] found id: ""
	I1002 20:57:25.513833  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.513839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:25.513844  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:25.513887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:25.540047  109844 cri.go:89] found id: ""
	I1002 20:57:25.540064  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.540073  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:25.540080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:25.540218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:25.565240  109844 cri.go:89] found id: ""
	I1002 20:57:25.565256  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.565262  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:25.565267  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:25.565332  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:25.591199  109844 cri.go:89] found id: ""
	I1002 20:57:25.591214  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.591221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:25.591226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:25.591271  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:25.617021  109844 cri.go:89] found id: ""
	I1002 20:57:25.617040  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.617047  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:25.617055  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:25.617071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:25.674861  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:25.668100   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.668693   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670241   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670676   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.672203   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:25.674872  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:25.674887  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:25.735460  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:25.735487  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.765055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:25.765071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:25.833285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:25.833307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
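The block above is one pass of minikube's apiserver wait loop: poll for a kube-apiserver process, ask CRI-O (via crictl) for each expected control-plane container, and gather diagnostics when none is found. The probe can be reproduced by hand on the node; both commands below are taken verbatim from the trace:

    sudo pgrep -xnf kube-apiserver.*minikube.*
    sudo crictl ps -a --quiet --name=kube-apiserver

An empty result from both (as here) means the control plane never started, so every subsequent kubectl call against localhost:8441 is expected to fail.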
	I1002 20:57:28.348626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:28.359370  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:28.359432  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:28.384665  109844 cri.go:89] found id: ""
	I1002 20:57:28.384681  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.384688  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:28.384692  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:28.384756  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:28.411127  109844 cri.go:89] found id: ""
	I1002 20:57:28.411142  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.411148  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:28.411153  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:28.411198  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:28.439419  109844 cri.go:89] found id: ""
	I1002 20:57:28.439433  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.439439  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:28.439444  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:28.439491  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:28.465419  109844 cri.go:89] found id: ""
	I1002 20:57:28.465434  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.465441  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:28.465446  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:28.465494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:28.492080  109844 cri.go:89] found id: ""
	I1002 20:57:28.492098  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.492107  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:28.492114  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:28.492171  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:28.518199  109844 cri.go:89] found id: ""
	I1002 20:57:28.518215  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.518221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:28.518226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:28.518290  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:28.545226  109844 cri.go:89] found id: ""
	I1002 20:57:28.545241  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.545248  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:28.545255  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:28.545266  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:28.574035  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:28.574055  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:28.640805  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:28.640827  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.655177  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:28.655195  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:28.715784  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:28.707733   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.708329   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.710706   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.711235   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.712816   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
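The repeated "connection refused" on localhost:8441 is consistent with the crictl output: no kube-apiserver container exists, so nothing is listening on the apiserver port. A quick manual check from the node is sketched below; ss and curl are not part of the test itself, and /healthz is assumed to be the usual apiserver health endpoint:

    # expect no listener while the apiserver is down
    sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
    # fails with the same refusal kubectl reports above
    curl -k --max-time 5 https://localhost:8441/healthz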
	I1002 20:57:28.715802  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:28.715813  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.282555  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:31.293415  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:31.293460  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:31.320069  109844 cri.go:89] found id: ""
	I1002 20:57:31.320084  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.320090  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:31.320096  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:31.320141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:31.347288  109844 cri.go:89] found id: ""
	I1002 20:57:31.347308  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.347315  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:31.347319  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:31.347370  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:31.373910  109844 cri.go:89] found id: ""
	I1002 20:57:31.373926  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.373932  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:31.373936  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:31.373980  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:31.399488  109844 cri.go:89] found id: ""
	I1002 20:57:31.399504  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.399510  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:31.399515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:31.399579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:31.425794  109844 cri.go:89] found id: ""
	I1002 20:57:31.425809  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.425815  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:31.425824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:31.425878  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:31.452232  109844 cri.go:89] found id: ""
	I1002 20:57:31.452247  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.452253  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:31.452258  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:31.452304  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:31.478189  109844 cri.go:89] found id: ""
	I1002 20:57:31.478208  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.478217  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:31.478226  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:31.478239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:31.535213  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:31.527960   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.528553   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530059   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530507   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.532158   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:31.535223  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:31.535235  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.596390  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:31.596416  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:31.625326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:31.625347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:31.695449  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:31.695470  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.210847  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:34.221612  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:34.221660  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:34.248100  109844 cri.go:89] found id: ""
	I1002 20:57:34.248118  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.248124  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:34.248129  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:34.248177  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:34.273928  109844 cri.go:89] found id: ""
	I1002 20:57:34.273947  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.273953  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:34.273958  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:34.274004  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:34.300659  109844 cri.go:89] found id: ""
	I1002 20:57:34.300677  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.300684  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:34.300688  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:34.300751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:34.328932  109844 cri.go:89] found id: ""
	I1002 20:57:34.328950  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.328958  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:34.328964  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:34.329012  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:34.355289  109844 cri.go:89] found id: ""
	I1002 20:57:34.355305  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.355315  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:34.355320  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:34.355371  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:34.381635  109844 cri.go:89] found id: ""
	I1002 20:57:34.381651  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.381658  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:34.381664  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:34.381713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:34.406539  109844 cri.go:89] found id: ""
	I1002 20:57:34.406558  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.406567  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:34.406575  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:34.406586  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:34.476613  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:34.476637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.491529  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:34.491545  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:34.548604  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:34.541411   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.541857   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543425   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543873   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.545469   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
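Note that the describe-nodes probe uses minikube's bundled kubectl and the in-node kubeconfig rather than the host's client; to rerun the failing call exactly as the test does, copied from the trace:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig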
	I1002 20:57:34.548616  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:34.548627  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:34.614034  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:34.614057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.146000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:37.156680  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:37.156731  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:37.183104  109844 cri.go:89] found id: ""
	I1002 20:57:37.183120  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.183126  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:37.183130  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:37.183180  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:37.209542  109844 cri.go:89] found id: ""
	I1002 20:57:37.209561  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.209570  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:37.209593  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:37.209651  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:37.236887  109844 cri.go:89] found id: ""
	I1002 20:57:37.236902  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.236907  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:37.236912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:37.236955  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:37.263572  109844 cri.go:89] found id: ""
	I1002 20:57:37.263590  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.263600  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:37.263606  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:37.263670  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:37.290064  109844 cri.go:89] found id: ""
	I1002 20:57:37.290081  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.290088  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:37.290092  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:37.290140  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:37.315854  109844 cri.go:89] found id: ""
	I1002 20:57:37.315870  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.315877  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:37.315881  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:37.315928  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:37.341863  109844 cri.go:89] found id: ""
	I1002 20:57:37.341881  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.341888  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:37.341896  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:37.341906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.370994  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:37.371009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:37.436106  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:37.436137  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:37.451121  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:37.451149  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:37.506868  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:37.499823   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.500382   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.501949   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.502458   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.504014   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:37.506882  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:37.506894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.067997  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:40.078961  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:40.079015  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:40.104825  109844 cri.go:89] found id: ""
	I1002 20:57:40.104841  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.104848  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:40.104853  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:40.104901  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:40.131395  109844 cri.go:89] found id: ""
	I1002 20:57:40.131410  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.131417  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:40.131421  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:40.131472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:40.156879  109844 cri.go:89] found id: ""
	I1002 20:57:40.156894  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.156900  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:40.156904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:40.156950  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:40.184037  109844 cri.go:89] found id: ""
	I1002 20:57:40.184052  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.184058  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:40.184063  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:40.184109  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:40.209631  109844 cri.go:89] found id: ""
	I1002 20:57:40.209645  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.209652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:40.209657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:40.209718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:40.235959  109844 cri.go:89] found id: ""
	I1002 20:57:40.235974  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.235981  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:40.235985  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:40.236031  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:40.263268  109844 cri.go:89] found id: ""
	I1002 20:57:40.263295  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.263303  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:40.263312  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:40.263329  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:40.277655  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:40.277674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:40.333759  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:40.326797   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.327375   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.328853   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.329279   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.330917   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:40.333771  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:40.333782  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.398547  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:40.398573  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:40.429055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:40.429075  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.000960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:43.011533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:43.011594  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:43.038639  109844 cri.go:89] found id: ""
	I1002 20:57:43.038658  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.038664  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:43.038670  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:43.038718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:43.064610  109844 cri.go:89] found id: ""
	I1002 20:57:43.064629  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.064638  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:43.064645  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:43.064692  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:43.092797  109844 cri.go:89] found id: ""
	I1002 20:57:43.092814  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.092829  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:43.092836  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:43.092905  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:43.117372  109844 cri.go:89] found id: ""
	I1002 20:57:43.117390  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.117398  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:43.117405  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:43.117455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:43.143883  109844 cri.go:89] found id: ""
	I1002 20:57:43.143898  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.143903  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:43.143908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:43.143954  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:43.168684  109844 cri.go:89] found id: ""
	I1002 20:57:43.168703  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.168711  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:43.168719  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:43.168794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:43.194200  109844 cri.go:89] found id: ""
	I1002 20:57:43.194219  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.194226  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:43.194233  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:43.194243  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:43.224696  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:43.224716  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.292485  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:43.292511  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:43.307408  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:43.307426  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:43.365123  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:43.365138  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:43.365151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:45.930176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:45.940786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:45.940834  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:45.966149  109844 cri.go:89] found id: ""
	I1002 20:57:45.966163  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.966170  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:45.966174  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:45.966229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:45.991076  109844 cri.go:89] found id: ""
	I1002 20:57:45.991091  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.991098  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:45.991103  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:45.991160  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:46.016684  109844 cri.go:89] found id: ""
	I1002 20:57:46.016699  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.016707  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:46.016712  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:46.016783  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:46.044048  109844 cri.go:89] found id: ""
	I1002 20:57:46.044066  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.044075  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:46.044080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:46.044126  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:46.072438  109844 cri.go:89] found id: ""
	I1002 20:57:46.072458  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.072463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:46.072468  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:46.072513  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:46.098408  109844 cri.go:89] found id: ""
	I1002 20:57:46.098427  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.098435  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:46.098440  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:46.098494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:46.125237  109844 cri.go:89] found id: ""
	I1002 20:57:46.125253  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.125260  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:46.125267  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:46.125279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:46.181454  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:46.181465  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:46.181477  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:46.245377  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:46.245400  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:46.273829  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:46.273850  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:46.343515  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:46.343537  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
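When the apiserver never comes up, the journals gathered in each pass are where the root cause usually surfaces; the collection commands, verbatim from the trace, can be run directly on the node:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400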
	I1002 20:57:48.859249  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:48.870377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:48.870433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:48.897669  109844 cri.go:89] found id: ""
	I1002 20:57:48.897687  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.897694  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:48.897699  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:48.897762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:48.925008  109844 cri.go:89] found id: ""
	I1002 20:57:48.925023  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.925030  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:48.925036  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:48.925083  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:48.951643  109844 cri.go:89] found id: ""
	I1002 20:57:48.951657  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.951664  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:48.951668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:48.951714  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:48.979002  109844 cri.go:89] found id: ""
	I1002 20:57:48.979020  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.979029  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:48.979036  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:48.979093  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:49.004625  109844 cri.go:89] found id: ""
	I1002 20:57:49.004641  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.004648  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:49.004652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:49.004701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:49.031772  109844 cri.go:89] found id: ""
	I1002 20:57:49.031788  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.031793  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:49.031805  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:49.031862  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:49.057980  109844 cri.go:89] found id: ""
	I1002 20:57:49.057996  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.058004  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:49.058013  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:49.058023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:49.124248  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:49.124270  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:49.138512  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:49.138533  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:49.195138  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:49.187056   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.188681   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.189138   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.190686   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.191107   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:49.195151  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:49.195173  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:49.258973  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:49.258997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:51.791466  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:51.802977  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:51.803035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:51.828498  109844 cri.go:89] found id: ""
	I1002 20:57:51.828514  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.828521  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:51.828526  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:51.828588  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:51.854342  109844 cri.go:89] found id: ""
	I1002 20:57:51.854360  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.854371  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:51.854378  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:51.854456  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:51.880507  109844 cri.go:89] found id: ""
	I1002 20:57:51.880524  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.880532  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:51.880537  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:51.880595  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:51.905868  109844 cri.go:89] found id: ""
	I1002 20:57:51.905885  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.905899  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:51.905906  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:51.905958  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:51.931501  109844 cri.go:89] found id: ""
	I1002 20:57:51.931520  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.931527  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:51.931533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:51.931584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:51.959507  109844 cri.go:89] found id: ""
	I1002 20:57:51.959531  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.959537  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:51.959543  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:51.959597  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:51.986060  109844 cri.go:89] found id: ""
	I1002 20:57:51.986075  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.986082  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:51.986090  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:51.986102  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:52.001242  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:52.001265  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:52.058943  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:52.058955  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:52.058966  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:52.124165  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:52.124189  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:52.153884  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:52.153905  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
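The seven crictl queries repeated above are minikube's per-component sweep: for each expected control-plane container it lists matching CRI containers and warns when none is found. A minimal shell sketch of the same sweep, built only from the commands that appear verbatim in this log (the loop itself is illustrative, not minikube's actual code):

    # Illustrative replay of the component check logged above.
    # Run inside the node (e.g. via `minikube ssh`); the names are
    # taken from the crictl invocations in the log.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      if [ -z "$(sudo crictl ps -a --quiet --name="$c")" ]; then
        echo "No container was found matching \"$c\""
      fi
    done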
	I1002 20:57:54.722906  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:54.734175  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:54.734232  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:54.759813  109844 cri.go:89] found id: ""
	I1002 20:57:54.759827  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.759834  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:54.759839  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:54.759886  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:54.786211  109844 cri.go:89] found id: ""
	I1002 20:57:54.786228  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.786234  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:54.786238  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:54.786296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:54.812209  109844 cri.go:89] found id: ""
	I1002 20:57:54.812224  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.812231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:54.812235  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:54.812279  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:54.838338  109844 cri.go:89] found id: ""
	I1002 20:57:54.838354  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.838359  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:54.838364  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:54.838409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:54.864235  109844 cri.go:89] found id: ""
	I1002 20:57:54.864250  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.864257  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:54.864262  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:54.864313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:54.889322  109844 cri.go:89] found id: ""
	I1002 20:57:54.889338  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.889345  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:54.889350  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:54.889408  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:54.914375  109844 cri.go:89] found id: ""
	I1002 20:57:54.914389  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.914396  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:54.914403  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:54.914413  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.982673  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:54.982695  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:54.997624  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:54.997643  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:55.054906  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:55.054918  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:55.054930  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:55.114767  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:55.114791  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
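Each "describe nodes" attempt fails identically: the node-local kubeconfig (/var/lib/minikube/kubeconfig) points kubectl at https://localhost:8441, and with no kube-apiserver container running the TCP connection is refused before API discovery (the memcache.go "API group list" calls) can complete. A hedged way to probe the same endpoint directly, assuming curl is present in the node image (/version is a standard apiserver path):

    # Illustrative probe of the endpoint the failing kubectl calls hit.
    # Expected while the apiserver is down: "Connection refused".
    curl -k https://localhost:8441/version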
	I1002 20:57:57.644999  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:57.656449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:57.656504  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:57.681519  109844 cri.go:89] found id: ""
	I1002 20:57:57.681536  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.681547  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:57.681562  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:57.681613  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:57.707282  109844 cri.go:89] found id: ""
	I1002 20:57:57.707299  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.707306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:57.707311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:57.707368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:57.733730  109844 cri.go:89] found id: ""
	I1002 20:57:57.733764  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.733773  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:57.733779  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:57.733829  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:57.759892  109844 cri.go:89] found id: ""
	I1002 20:57:57.759910  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.759919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:57.759930  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:57.759977  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:57.786461  109844 cri.go:89] found id: ""
	I1002 20:57:57.786480  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.786488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:57.786494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:57.786554  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:57.811498  109844 cri.go:89] found id: ""
	I1002 20:57:57.811513  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.811520  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:57.811525  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:57.811584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:57.838643  109844 cri.go:89] found id: ""
	I1002 20:57:57.838658  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.838664  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:57.838672  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:57.838683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:57.903092  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:57.903112  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:57.917294  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:57.917313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:57.973186  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:57.973196  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:57.973206  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:58.037591  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:58.037615  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:00.568697  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:00.579453  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:00.579509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:00.605205  109844 cri.go:89] found id: ""
	I1002 20:58:00.605221  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.605228  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:00.605236  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:00.605281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:00.630667  109844 cri.go:89] found id: ""
	I1002 20:58:00.630683  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.630690  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:00.630695  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:00.630779  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:00.656328  109844 cri.go:89] found id: ""
	I1002 20:58:00.656343  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.656349  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:00.656356  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:00.656404  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:00.687352  109844 cri.go:89] found id: ""
	I1002 20:58:00.687372  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.687380  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:00.687387  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:00.687450  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:00.715971  109844 cri.go:89] found id: ""
	I1002 20:58:00.715989  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.715996  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:00.716001  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:00.716051  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:00.743250  109844 cri.go:89] found id: ""
	I1002 20:58:00.743267  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.743274  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:00.743279  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:00.743337  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:00.768377  109844 cri.go:89] found id: ""
	I1002 20:58:00.768394  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.768402  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:00.768410  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:00.768421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:00.836309  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:00.836330  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:00.851074  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:00.851091  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:00.909067  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:00.909078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:00.909089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:00.967974  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:00.967996  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:03.498950  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:03.509660  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:03.509721  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:03.535662  109844 cri.go:89] found id: ""
	I1002 20:58:03.535677  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.535684  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:03.535689  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:03.535733  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:03.561250  109844 cri.go:89] found id: ""
	I1002 20:58:03.561265  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.561272  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:03.561277  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:03.561321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:03.587048  109844 cri.go:89] found id: ""
	I1002 20:58:03.587067  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.587076  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:03.587083  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:03.587147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:03.613674  109844 cri.go:89] found id: ""
	I1002 20:58:03.613690  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.613697  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:03.613702  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:03.613769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:03.640328  109844 cri.go:89] found id: ""
	I1002 20:58:03.640347  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.640355  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:03.640361  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:03.640422  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:03.666291  109844 cri.go:89] found id: ""
	I1002 20:58:03.666312  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.666319  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:03.666331  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:03.666382  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:03.691967  109844 cri.go:89] found id: ""
	I1002 20:58:03.691985  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.691992  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:03.692006  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:03.692016  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:03.759409  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:03.759439  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:03.774258  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:03.774279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:03.832338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:03.832353  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:03.832368  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:03.893996  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:03.894020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:06.425787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:06.436589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:06.436637  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:06.462848  109844 cri.go:89] found id: ""
	I1002 20:58:06.462863  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.462870  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:06.462876  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:06.462923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:06.488755  109844 cri.go:89] found id: ""
	I1002 20:58:06.488775  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.488784  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:06.488790  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:06.488840  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:06.514901  109844 cri.go:89] found id: ""
	I1002 20:58:06.514916  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.514922  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:06.514927  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:06.514970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:06.541198  109844 cri.go:89] found id: ""
	I1002 20:58:06.541216  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.541222  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:06.541227  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:06.541274  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:06.566811  109844 cri.go:89] found id: ""
	I1002 20:58:06.566829  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.566835  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:06.566839  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:06.566889  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:06.592998  109844 cri.go:89] found id: ""
	I1002 20:58:06.593016  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.593025  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:06.593032  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:06.593082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:06.619126  109844 cri.go:89] found id: ""
	I1002 20:58:06.619142  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.619149  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:06.619156  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:06.619169  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:06.688927  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:06.688949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:06.703470  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:06.703489  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:06.759531  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:06.759547  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:06.759558  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:06.821429  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:06.821453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:09.350584  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:09.361407  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:09.361457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:09.387670  109844 cri.go:89] found id: ""
	I1002 20:58:09.387686  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.387692  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:09.387697  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:09.387769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:09.414282  109844 cri.go:89] found id: ""
	I1002 20:58:09.414297  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.414303  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:09.414308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:09.414359  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:09.439986  109844 cri.go:89] found id: ""
	I1002 20:58:09.440004  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.440013  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:09.440021  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:09.440078  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:09.465260  109844 cri.go:89] found id: ""
	I1002 20:58:09.465274  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.465279  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:09.465284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:09.465342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:09.490459  109844 cri.go:89] found id: ""
	I1002 20:58:09.490475  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.490485  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:09.490492  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:09.490542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:09.517572  109844 cri.go:89] found id: ""
	I1002 20:58:09.517589  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.517597  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:09.517604  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:09.517657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:09.543171  109844 cri.go:89] found id: ""
	I1002 20:58:09.543190  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.543200  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:09.543210  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:09.543224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:09.610811  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:09.610836  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:09.625732  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:09.625765  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:09.684133  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:09.684159  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:09.684172  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:09.750121  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:09.750146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:12.281914  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:12.292614  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:12.292681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:12.319213  109844 cri.go:89] found id: ""
	I1002 20:58:12.319229  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.319236  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:12.319241  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:12.319307  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:12.346475  109844 cri.go:89] found id: ""
	I1002 20:58:12.346491  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.346497  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:12.346506  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:12.346558  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:12.373396  109844 cri.go:89] found id: ""
	I1002 20:58:12.373412  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.373418  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:12.373422  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:12.373472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:12.399960  109844 cri.go:89] found id: ""
	I1002 20:58:12.399975  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.399984  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:12.399990  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:12.400046  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:12.426115  109844 cri.go:89] found id: ""
	I1002 20:58:12.426134  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.426143  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:12.426148  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:12.426199  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:12.453989  109844 cri.go:89] found id: ""
	I1002 20:58:12.454005  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.454012  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:12.454017  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:12.454082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:12.480468  109844 cri.go:89] found id: ""
	I1002 20:58:12.480482  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.480489  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:12.480497  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:12.480506  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:12.546963  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:12.546987  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:12.561865  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:12.561884  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:12.618630  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:12.611604   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.612174   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.613811   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.614220   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.615797   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:12.618644  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:12.618659  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:12.679779  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:12.679800  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:15.211438  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:15.222920  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:15.222984  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:15.249459  109844 cri.go:89] found id: ""
	I1002 20:58:15.249477  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.249486  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:15.249493  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:15.249563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:15.275298  109844 cri.go:89] found id: ""
	I1002 20:58:15.275317  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.275324  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:15.275329  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:15.275376  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:15.301700  109844 cri.go:89] found id: ""
	I1002 20:58:15.301716  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.301722  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:15.301730  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:15.301798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:15.329414  109844 cri.go:89] found id: ""
	I1002 20:58:15.329435  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.329442  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:15.329449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:15.329509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:15.355068  109844 cri.go:89] found id: ""
	I1002 20:58:15.355085  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.355093  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:15.355098  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:15.355148  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:15.380359  109844 cri.go:89] found id: ""
	I1002 20:58:15.380376  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.380383  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:15.380388  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:15.380447  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:15.407083  109844 cri.go:89] found id: ""
	I1002 20:58:15.407100  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.407107  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:15.407114  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:15.407125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:15.475929  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:15.475952  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:15.490571  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:15.490597  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:15.548455  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:15.541509   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.542074   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.543830   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.544263   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.545369   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:15.548470  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:15.548492  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:15.612985  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:15.613011  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:18.144173  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:18.154768  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:18.154839  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:18.181108  109844 cri.go:89] found id: ""
	I1002 20:58:18.181127  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.181135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:18.181142  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:18.181211  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:18.207541  109844 cri.go:89] found id: ""
	I1002 20:58:18.207557  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.207564  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:18.207568  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:18.207617  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:18.234607  109844 cri.go:89] found id: ""
	I1002 20:58:18.234623  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.234630  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:18.234635  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:18.234682  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:18.262449  109844 cri.go:89] found id: ""
	I1002 20:58:18.262465  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.262471  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:18.262476  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:18.262525  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:18.288587  109844 cri.go:89] found id: ""
	I1002 20:58:18.288604  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.288611  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:18.288615  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:18.288671  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:18.315591  109844 cri.go:89] found id: ""
	I1002 20:58:18.315608  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.315616  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:18.315623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:18.315686  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:18.341916  109844 cri.go:89] found id: ""
	I1002 20:58:18.341934  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.341943  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:18.341953  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:18.341967  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:18.409370  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:18.409397  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:18.423940  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:18.423957  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:18.481317  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:18.474299   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.474857   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476482   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476953   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.478581   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:18.481328  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:18.481341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:18.544851  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:18.544915  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:21.076714  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:21.087984  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:21.088035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:21.114553  109844 cri.go:89] found id: ""
	I1002 20:58:21.114567  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.114574  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:21.114579  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:21.114627  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:21.140623  109844 cri.go:89] found id: ""
	I1002 20:58:21.140640  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.140647  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:21.140652  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:21.140709  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:21.167287  109844 cri.go:89] found id: ""
	I1002 20:58:21.167303  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.167310  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:21.167314  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:21.167366  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:21.192955  109844 cri.go:89] found id: ""
	I1002 20:58:21.192970  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.192976  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:21.192981  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:21.193026  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:21.218443  109844 cri.go:89] found id: ""
	I1002 20:58:21.218461  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.218470  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:21.218477  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:21.218543  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:21.245610  109844 cri.go:89] found id: ""
	I1002 20:58:21.245629  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.245636  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:21.245641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:21.245705  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:21.274044  109844 cri.go:89] found id: ""
	I1002 20:58:21.274062  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.274071  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:21.274082  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:21.274094  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:21.344823  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:21.344846  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:21.359586  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:21.359607  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:21.415715  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:21.408650   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.409207   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.410856   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.411238   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.412941   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:21.415727  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:21.415761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:21.481719  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:21.481748  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.012099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:24.023176  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:24.023230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:24.048833  109844 cri.go:89] found id: ""
	I1002 20:58:24.048848  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.048854  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:24.048859  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:24.048910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:24.075718  109844 cri.go:89] found id: ""
	I1002 20:58:24.075734  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.075760  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:24.075767  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:24.075820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:24.102393  109844 cri.go:89] found id: ""
	I1002 20:58:24.102408  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.102415  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:24.102420  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:24.102470  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:24.128211  109844 cri.go:89] found id: ""
	I1002 20:58:24.128226  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.128233  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:24.128237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:24.128295  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:24.154298  109844 cri.go:89] found id: ""
	I1002 20:58:24.154317  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.154337  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:24.154342  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:24.154400  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:24.180259  109844 cri.go:89] found id: ""
	I1002 20:58:24.180279  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.180289  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:24.180294  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:24.180343  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:24.206017  109844 cri.go:89] found id: ""
	I1002 20:58:24.206032  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.206038  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:24.206045  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:24.206057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:24.262477  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:24.255581   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.256099   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.257667   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.258105   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.259636   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:24.262487  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:24.262499  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:24.326558  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:24.326583  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.357911  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:24.357927  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:24.425144  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:24.425170  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:26.942340  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:26.953162  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:26.953210  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:26.977629  109844 cri.go:89] found id: ""
	I1002 20:58:26.977645  109844 logs.go:282] 0 containers: []
	W1002 20:58:26.977652  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:26.977656  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:26.977701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:27.003794  109844 cri.go:89] found id: ""
	I1002 20:58:27.003810  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.003817  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:27.003821  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:27.003871  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:27.031644  109844 cri.go:89] found id: ""
	I1002 20:58:27.031662  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.031669  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:27.031673  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:27.031723  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:27.058490  109844 cri.go:89] found id: ""
	I1002 20:58:27.058522  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.058529  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:27.058533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:27.058580  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:27.083451  109844 cri.go:89] found id: ""
	I1002 20:58:27.083468  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.083475  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:27.083480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:27.083536  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:27.108449  109844 cri.go:89] found id: ""
	I1002 20:58:27.108467  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.108475  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:27.108481  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:27.108542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:27.135415  109844 cri.go:89] found id: ""
	I1002 20:58:27.135433  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.135441  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:27.135451  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:27.135467  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:27.206016  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:27.206039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:27.220873  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:27.220894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:27.276309  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:27.269235   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.269791   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271364   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271799   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.273317   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:27.276320  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:27.276335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:27.341398  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:27.341421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:29.872391  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:29.883459  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:29.883531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:29.909713  109844 cri.go:89] found id: ""
	I1002 20:58:29.909729  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.909748  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:29.909755  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:29.909806  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:29.934338  109844 cri.go:89] found id: ""
	I1002 20:58:29.934354  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.934360  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:29.934365  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:29.934409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:29.961900  109844 cri.go:89] found id: ""
	I1002 20:58:29.961917  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.961926  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:29.961932  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:29.961998  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:29.988238  109844 cri.go:89] found id: ""
	I1002 20:58:29.988253  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.988260  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:29.988265  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:29.988328  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:30.013598  109844 cri.go:89] found id: ""
	I1002 20:58:30.013613  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.013619  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:30.013624  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:30.013674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:30.040799  109844 cri.go:89] found id: ""
	I1002 20:58:30.040817  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.040824  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:30.040829  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:30.040875  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:30.067159  109844 cri.go:89] found id: ""
	I1002 20:58:30.067174  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.067180  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:30.067187  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:30.067199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:30.081264  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:30.081282  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:30.136411  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:30.129335   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.129861   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131445   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131865   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.133370   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:30.136422  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:30.136436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:30.198567  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:30.198599  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:30.226466  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:30.226488  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:32.794266  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:32.805593  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:32.805643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:32.832000  109844 cri.go:89] found id: ""
	I1002 20:58:32.832015  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.832022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:32.832027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:32.832072  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:32.858662  109844 cri.go:89] found id: ""
	I1002 20:58:32.858680  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.858687  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:32.858691  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:32.858758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:32.884652  109844 cri.go:89] found id: ""
	I1002 20:58:32.884671  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.884679  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:32.884686  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:32.884767  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:32.911548  109844 cri.go:89] found id: ""
	I1002 20:58:32.911571  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.911578  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:32.911583  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:32.911631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:32.939319  109844 cri.go:89] found id: ""
	I1002 20:58:32.939335  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.939343  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:32.939347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:32.939396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:32.965654  109844 cri.go:89] found id: ""
	I1002 20:58:32.965670  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.965677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:32.965681  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:32.965750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:32.991821  109844 cri.go:89] found id: ""
	I1002 20:58:32.991837  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.991843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:32.991851  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:32.991861  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:33.059096  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:33.059118  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:33.074520  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:33.074536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:33.130853  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:33.124022   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.124509   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126111   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126586   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.128121   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:33.130867  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:33.130881  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:33.196122  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:33.196146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:35.728638  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:35.739628  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:35.739676  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:35.764726  109844 cri.go:89] found id: ""
	I1002 20:58:35.764760  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.764771  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:35.764777  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:35.764823  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:35.791011  109844 cri.go:89] found id: ""
	I1002 20:58:35.791026  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.791032  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:35.791037  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:35.791082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:35.817209  109844 cri.go:89] found id: ""
	I1002 20:58:35.817225  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.817231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:35.817236  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:35.817281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:35.842125  109844 cri.go:89] found id: ""
	I1002 20:58:35.842139  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.842145  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:35.842154  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:35.842200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:35.867608  109844 cri.go:89] found id: ""
	I1002 20:58:35.867625  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.867631  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:35.867636  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:35.867681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:35.893798  109844 cri.go:89] found id: ""
	I1002 20:58:35.893813  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.893819  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:35.893824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:35.893881  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:35.920822  109844 cri.go:89] found id: ""
	I1002 20:58:35.920837  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.920843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:35.920851  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:35.920862  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:35.982786  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:35.982809  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:36.012445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:36.012461  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:36.079729  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:36.079764  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:36.094119  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:36.094139  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:36.149838  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:36.142929   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.143480   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145076   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145533   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.147087   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:38.650569  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:38.661345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:38.661406  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:38.687690  109844 cri.go:89] found id: ""
	I1002 20:58:38.687709  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.687719  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:38.687729  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:38.687800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:38.712812  109844 cri.go:89] found id: ""
	I1002 20:58:38.712830  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.712840  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:38.712846  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:38.712897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:38.738922  109844 cri.go:89] found id: ""
	I1002 20:58:38.738938  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.738945  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:38.738951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:38.739014  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:38.766166  109844 cri.go:89] found id: ""
	I1002 20:58:38.766184  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.766191  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:38.766201  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:38.766259  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:38.793662  109844 cri.go:89] found id: ""
	I1002 20:58:38.793679  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.793687  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:38.793692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:38.793758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:38.820204  109844 cri.go:89] found id: ""
	I1002 20:58:38.820225  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.820233  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:38.820242  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:38.820301  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:38.846100  109844 cri.go:89] found id: ""
	I1002 20:58:38.846116  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.846122  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:38.846130  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:38.846143  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:38.912234  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:38.912257  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:38.926642  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:38.926661  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:38.983128  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:38.975680   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.976323   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.977925   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.978355   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.979926   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:38.983140  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:38.983151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:39.042170  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:39.042192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
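Each retry in this stretch of the log follows one diagnostic cycle: probe for each control-plane container by name with crictl and, when every probe comes back empty, gather kubelet, dmesg, CRI-O, and container-status logs. A minimal sketch of that cycle, runnable inside the minikube node over SSH (the commands are copied from the log; the component list is inferred from the probes shown above):

    #!/bin/bash
    # Probe each control-plane component the way the log does; `crictl ps -a --quiet
    # --name=<component>` prints matching container IDs and prints nothing when none exist.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done
    # The same log gathering minikube performs when all probes come up empty:
    sudo journalctl -u kubelet -n 400                                        # kubelet
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel, warn and above
    sudo journalctl -u crio -n 400                                           # CRI-O
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a           # container status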
	I1002 20:58:41.573431  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:41.584132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:41.584179  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:41.610465  109844 cri.go:89] found id: ""
	I1002 20:58:41.610490  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.610500  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:41.610507  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:41.610571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:41.636463  109844 cri.go:89] found id: ""
	I1002 20:58:41.636481  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.636488  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:41.636493  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:41.636544  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:41.663306  109844 cri.go:89] found id: ""
	I1002 20:58:41.663324  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.663334  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:41.663340  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:41.663389  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:41.689945  109844 cri.go:89] found id: ""
	I1002 20:58:41.689963  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.689970  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:41.689975  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:41.690030  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:41.716483  109844 cri.go:89] found id: ""
	I1002 20:58:41.716498  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.716511  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:41.716515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:41.716563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:41.741653  109844 cri.go:89] found id: ""
	I1002 20:58:41.741670  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.741677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:41.741682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:41.741728  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:41.768401  109844 cri.go:89] found id: ""
	I1002 20:58:41.768418  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.768425  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:41.768433  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:41.768444  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:41.825098  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:41.825108  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:41.825120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:41.885569  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:41.885592  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.914823  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:41.914840  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:41.982285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:41.982309  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.498020  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:44.508926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:44.508975  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:44.534766  109844 cri.go:89] found id: ""
	I1002 20:58:44.534783  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.534791  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:44.534797  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:44.534849  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:44.561400  109844 cri.go:89] found id: ""
	I1002 20:58:44.561418  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.561425  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:44.561429  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:44.561481  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:44.587621  109844 cri.go:89] found id: ""
	I1002 20:58:44.587638  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.587644  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:44.587649  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:44.587696  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:44.612688  109844 cri.go:89] found id: ""
	I1002 20:58:44.612703  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.612709  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:44.612717  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:44.612784  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:44.639713  109844 cri.go:89] found id: ""
	I1002 20:58:44.639728  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.639755  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:44.639763  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:44.639821  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:44.666252  109844 cri.go:89] found id: ""
	I1002 20:58:44.666271  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.666278  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:44.666283  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:44.666330  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:44.692295  109844 cri.go:89] found id: ""
	I1002 20:58:44.692311  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.692318  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:44.692326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:44.692336  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:44.763438  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:44.763462  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.777919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:44.777938  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:44.833114  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:44.833126  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:44.833138  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:44.893410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:44.893436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.425929  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:47.437727  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:47.437800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:47.465106  109844 cri.go:89] found id: ""
	I1002 20:58:47.465125  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.465135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:47.465141  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:47.465202  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:47.492450  109844 cri.go:89] found id: ""
	I1002 20:58:47.492469  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.492477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:47.492487  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:47.492548  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:47.518249  109844 cri.go:89] found id: ""
	I1002 20:58:47.518266  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.518273  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:47.518280  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:47.518329  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:47.546009  109844 cri.go:89] found id: ""
	I1002 20:58:47.546026  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.546035  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:47.546040  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:47.546095  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:47.571969  109844 cri.go:89] found id: ""
	I1002 20:58:47.571984  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.571991  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:47.571995  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:47.572044  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:47.598332  109844 cri.go:89] found id: ""
	I1002 20:58:47.598352  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.598362  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:47.598371  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:47.598433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:47.624909  109844 cri.go:89] found id: ""
	I1002 20:58:47.624923  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.624932  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:47.624942  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:47.624955  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:47.682066  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:47.682078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:47.682089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:47.742340  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:47.742363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.772411  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:47.772428  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:47.841816  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:47.841839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
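The pgrep probes above recur on a roughly three-second cadence until the restart budget runs out; the next line of the log closes the loop at 4m2.9s. A rough sketch of that wait, with the interval and budget inferred from the timestamps (both values are assumptions, not minikube's literal constants):

    # Wait for a kube-apiserver process belonging to this minikube profile.
    # pgrep -f matches against the full command line, -x requires the whole
    # line to match the pattern, and -n reports only the newest match.
    deadline=$((SECONDS + 240))    # assumed ~4-minute budget
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never appeared; falling through to cluster reset" >&2
        break
      fi
      sleep 3                      # assumed interval, matching the ~3 s gaps above
    done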
	I1002 20:58:50.357907  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:50.368776  109844 kubeadm.go:601] duration metric: took 4m2.902167912s to restartPrimaryControlPlane
	W1002 20:58:50.368863  109844 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:58:50.368929  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:58:50.818759  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:50.831475  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:58:50.839597  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:58:50.839643  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:58:50.847290  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:58:50.847300  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 20:58:50.847341  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:58:50.854889  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:58:50.854928  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:58:50.862239  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:58:50.869705  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:58:50.869763  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:58:50.877993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.885836  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:58:50.885887  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.893993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:58:50.902316  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:58:50.902371  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
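The stale-config pass above is the same four-step check per kubeconfig: list all four files at once, then for each one grep for the expected control-plane endpoint and remove the file when the marker is absent (or, as here, when the file is already gone). The whole sequence collapses to a short loop; the endpoint string is copied from the log:

    ENDPOINT='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero both when the string is missing and when the file
      # does not exist, so either way the stale file is removed.
      sudo grep "$ENDPOINT" "/etc/kubernetes/$f" >/dev/null 2>&1 \
        || sudo rm -f "/etc/kubernetes/$f"
    done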
	I1002 20:58:50.910549  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:58:50.946945  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:58:50.946991  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:58:50.966485  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:58:50.966578  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:58:50.966620  109844 kubeadm.go:318] OS: Linux
	I1002 20:58:50.966677  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:58:50.966753  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:58:50.966809  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:58:50.966867  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:58:50.966933  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:58:50.966988  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:58:50.967043  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:58:50.967090  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:58:51.025471  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:58:51.025621  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:58:51.025764  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:58:51.032580  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:58:51.036477  109844 out.go:252]   - Generating certificates and keys ...
	I1002 20:58:51.036579  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:58:51.036655  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:58:51.036755  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:58:51.036828  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:58:51.036907  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:58:51.036961  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:58:51.037039  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:58:51.037113  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:58:51.037183  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:58:51.037249  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:58:51.037279  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:58:51.037325  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:58:51.187682  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:58:51.260672  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:58:51.923940  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:58:51.962992  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:58:52.022920  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:58:52.023298  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:58:52.025586  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:58:52.027495  109844 out.go:252]   - Booting up control plane ...
	I1002 20:58:52.027608  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:58:52.027713  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:58:52.027804  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:58:52.042406  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:58:52.042511  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:58:52.049022  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:58:52.049337  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:58:52.049378  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:58:52.155568  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:58:52.155766  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:58:53.156432  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000945383s
	I1002 20:58:53.159662  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:58:53.159797  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:58:53.159937  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:58:53.160043  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:02:53.160214  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	I1002 21:02:53.160391  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	I1002 21:02:53.160519  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	I1002 21:02:53.160527  109844 kubeadm.go:318] 
	I1002 21:02:53.160620  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:02:53.160688  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:02:53.160785  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:02:53.160862  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:02:53.160927  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:02:53.161001  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:02:53.161004  109844 kubeadm.go:318] 
	I1002 21:02:53.164399  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:02:53.164524  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:02:53.165091  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:02:53.165168  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
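kubeadm's hint above is usable as-is; chained together, the two commands surface the logs of whichever control-plane container crashed. CONTAINERID is a placeholder for an ID taken from the first command's output, and sudo is added here on the assumption that crictl needs root inside the node:

    # List every Kubernetes container CRI-O knows about, crashed ones included:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # Then pull the failing container's logs:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID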
	W1002 21:02:53.165349  109844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000945383s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:02:53.165441  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:02:53.609874  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:53.623007  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:02:53.623061  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:53.631223  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:02:53.631235  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 21:02:53.631283  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:53.639093  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:02:53.639137  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:02:53.647228  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:53.655566  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:02:53.655610  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:53.663430  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.671338  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:02:53.671390  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.679032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:53.686944  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:02:53.686993  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:02:53.694170  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:02:53.730792  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:02:53.730837  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:02:53.752207  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:02:53.752260  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:02:53.752295  109844 kubeadm.go:318] OS: Linux
	I1002 21:02:53.752337  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:02:53.752403  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:02:53.752440  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:02:53.752485  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:02:53.752585  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:02:53.752641  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:02:53.752685  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:02:53.752720  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:02:53.811160  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:02:53.811301  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:02:53.811426  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:02:53.817686  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:02:53.822264  109844 out.go:252]   - Generating certificates and keys ...
	I1002 21:02:53.822366  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:02:53.822429  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:02:53.822500  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:02:53.822558  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:02:53.822649  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:02:53.822721  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:02:53.822797  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:02:53.822883  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:02:53.822984  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:02:53.823080  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:02:53.823129  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:02:53.823200  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:02:54.089650  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:02:54.165018  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:02:54.351562  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:02:54.606636  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:02:54.799514  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:02:54.799929  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:02:54.802220  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:02:54.804402  109844 out.go:252]   - Booting up control plane ...
	I1002 21:02:54.804516  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:02:54.804616  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:02:54.804724  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:02:54.818368  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:02:54.818509  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:02:54.825531  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:02:54.826683  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:02:54.826734  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:02:54.927546  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:02:54.927690  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:02:55.429241  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.893032ms
	I1002 21:02:55.432296  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:02:55.432407  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 21:02:55.432483  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:02:55.432583  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:06:55.432671  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	I1002 21:06:55.432869  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	I1002 21:06:55.432961  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	I1002 21:06:55.432968  109844 kubeadm.go:318] 
	I1002 21:06:55.433037  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:06:55.433100  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:06:55.433168  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:06:55.433259  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:06:55.433328  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:06:55.433419  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:06:55.433434  109844 kubeadm.go:318] 
	I1002 21:06:55.436835  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:06:55.436949  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:06:55.437474  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:06:55.437568  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
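The three health URLs in this error are plain HTTPS endpoints, so which component is down can be confirmed directly from inside the node, assuming curl is present in the node image; "connection refused" on all three, as seen here, means none of the processes is listening at all:

    # The endpoints kubeadm polls, copied from the error above.
    # -k skips TLS verification (the certs are cluster-internal), -s silences progress.
    curl -ks https://192.168.49.2:8441/livez     # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez       # kube-scheduler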
	I1002 21:06:55.437594  109844 kubeadm.go:402] duration metric: took 12m8.007755847s to StartCluster
	I1002 21:06:55.437641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:06:55.437710  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:06:55.464382  109844 cri.go:89] found id: ""
	I1002 21:06:55.464398  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.464404  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:06:55.464409  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:06:55.464469  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:06:55.490606  109844 cri.go:89] found id: ""
	I1002 21:06:55.490623  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.490633  109844 logs.go:284] No container was found matching "etcd"
	I1002 21:06:55.490638  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:06:55.490702  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:06:55.516529  109844 cri.go:89] found id: ""
	I1002 21:06:55.516547  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.516560  109844 logs.go:284] No container was found matching "coredns"
	I1002 21:06:55.516565  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:06:55.516631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:06:55.542896  109844 cri.go:89] found id: ""
	I1002 21:06:55.542913  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.542919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:06:55.542926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:06:55.542976  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:06:55.570192  109844 cri.go:89] found id: ""
	I1002 21:06:55.570206  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.570212  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:06:55.570217  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:06:55.570263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:06:55.596069  109844 cri.go:89] found id: ""
	I1002 21:06:55.596092  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.596102  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:06:55.596107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:06:55.596157  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:06:55.621555  109844 cri.go:89] found id: ""
	I1002 21:06:55.621572  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.621579  109844 logs.go:284] No container was found matching "kindnet"
	I1002 21:06:55.621587  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 21:06:55.621600  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:06:55.635371  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:06:55.635389  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:06:55.691316  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:06:55.691337  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:06:55.691347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:06:55.755862  109844 logs.go:123] Gathering logs for container status ...
	I1002 21:06:55.755886  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:06:55.784730  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 21:06:55.784767  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 21:06:55.854494  109844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:06:55.854545  109844 out.go:285] * 
	W1002 21:06:55.854631  109844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.854657  109844 out.go:285] * 
	W1002 21:06:55.856372  109844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:06:55.860308  109844 out.go:203] 
	W1002 21:06:55.861642  109844 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.861662  109844 out.go:285] * 
	I1002 21:06:55.863851  109844 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.23149511Z" level=info msg="createCtr: removing container a11ad10a6facd115efda51f95be01c7d4b18e85a7266a175f7ba04020606f46a" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.231548884Z" level=info msg="createCtr: deleting container a11ad10a6facd115efda51f95be01c7d4b18e85a7266a175f7ba04020606f46a from storage" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.233892054Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.205556556Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=1de1a49a-6746-43c3-8fdb-9dadd10c7f27 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.206381729Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6bcef6bf-e782-40ad-bfef-f18dddb9b25a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.20714502Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-012915/kube-scheduler" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.207343617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.210669982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.211138693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.229548778Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230898309Z" level=info msg="createCtr: deleting container ID f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213 from idIndex" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230945457Z" level=info msg="createCtr: removing container f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230976669Z" level=info msg="createCtr: deleting container f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213 from storage" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.232965467Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.204652395Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=946c0224-2954-4597-abd9-48c739fd05e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.205506999Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f1f8205b-13ab-48d1-89be-5ddfe7f89bfc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.206240102Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.206447331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.210283193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.210863417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.228124139Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229573851Z" level=info msg="createCtr: deleting container ID 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from idIndex" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229621183Z" level=info msg="createCtr: removing container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229659341Z" level=info msg="createCtr: deleting container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from storage" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.231972859Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:56.985951   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:56.986457   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:56.988026   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:56.988513   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:56.990077   15734 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:06:57 up  2:49,  0 user,  load average: 0.16, 0.07, 0.19
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:06:50 functional-012915 kubelet[14964]: E1002 21:06:50.234323   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:50 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:50 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:50 functional-012915 kubelet[14964]: E1002 21:06:50.234356   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.349849   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.829284   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: I1002 21:06:51.984192   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.984565   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.205148   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233255   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:06:53 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:53 functional-012915 kubelet[14964]:  > podSandboxID="8fcd09580c94c358972341d218f18641fb01c2881f93974b0a738c79d068fdb3"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233360   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:53 functional-012915 kubelet[14964]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:53 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233399   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.204278   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.218859   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232216   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:06:55 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:55 functional-012915 kubelet[14964]:  > podSandboxID="0a35d159a682c6cd7da21a9fb2e3efef99f6f6c3f06af6071bd80e1de599842e"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232329   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:55 functional-012915 kubelet[14964]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:55 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232366   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (311.930508ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (733.93s)
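All three kubeadm attempts quoted above fail identically: CRI-O cannot create the control-plane containers ("cannot open sd-bus: No such file or directory"), so kube-apiserver, kube-controller-manager and kube-scheduler never start and every check against ports 8441, 10257 and 10259 is refused. A minimal triage sketch, reusing the crictl commands kubeadm itself recommends above; the cgroup-manager grep and the systemd bus socket path are assumptions about the usual cause of sd-bus errors, not something this log confirms:

	# list the Kubernetes containers, exactly as the kubeadm output recommends
	out/minikube-linux-amd64 ssh -p functional-012915 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# once a failing container ID is known, inspect its logs
	out/minikube-linux-amd64 ssh -p functional-012915 -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	# assumption: sd-bus errors typically mean a systemd cgroup manager with no reachable systemd bus inside the node
	out/minikube-linux-amd64 ssh -p functional-012915 -- sudo grep -R cgroup_manager /etc/crio/
	out/minikube-linux-amd64 ssh -p functional-012915 -- ls -l /run/systemd/private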

x
+
TestFunctional/serial/ComponentHealth (1.85s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-012915 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-012915 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (48.946112ms)

** stderr ** 
	E1002 21:06:57.770511  123114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:06:57.770878  123114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:06:57.772297  123114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:06:57.772569  123114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:06:57.774012  123114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-012915 get po -l tier=control-plane -n kube-system -o=json": exit status 1
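This failure is downstream of the dead control plane from ExtraConfig above: every kubectl call is refused on 192.168.49.2:8441, so there are no control-plane pods to inspect. A quick reachability check, offered as a hedged sketch rather than part of the test (the curl probe is an assumption; /livez is the apiserver's standard liveness endpoint):

	# "connection refused" here confirms nothing is listening on the apiserver port
	curl -k --max-time 5 https://192.168.49.2:8441/livez
	# minikube's own view of the host/apiserver state
	out/minikube-linux-amd64 status -p functional-012915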
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
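One detail worth noting in the inspect output: the guest apiserver port 8441/tcp is published to 127.0.0.1:32781 on the host, so the endpoint can also be probed through Docker's port mapping. A hypothetical host-side check (not run by the suite):

	# probe the apiserver via the published port shown in the inspect output above
	curl -k --max-time 5 https://127.0.0.1:32781/livez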
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (293.135809ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                     ARGS                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ unpause │ nospam-461767 --log_dir /tmp/nospam-461767 unpause                                                            │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:39 UTC │ 02 Oct 25 20:39 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ stop    │ nospam-461767 --log_dir /tmp/nospam-461767 stop                                                               │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ delete  │ -p nospam-461767                                                                                              │ nospam-461767     │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │ 02 Oct 25 20:40 UTC │
	│ start   │ -p functional-012915 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:40 UTC │                     │
	│ start   │ -p functional-012915 --alsologtostderr -v=8                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:48 UTC │                     │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.1                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:3.3                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add registry.k8s.io/pause:latest                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache add minikube-local-cache-test:functional-012915                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ functional-012915 cache delete minikube-local-cache-test:functional-012915                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ list                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl images                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cache   │ functional-012915 cache reload                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ ssh     │ functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                              │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ kubectl │ functional-012915 kubectl -- --context functional-012915 get pods                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ start   │ -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
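	The table above is minikube's per-command audit log for this run. A hedged sketch of pulling the same data by hand (the --audit flag and the audit.json location are assumed from minikube's defaults; neither is shown in this log):
	
	    $ out/minikube-linux-amd64 logs --audit                                  # renders the same table
	    $ tail -n 3 "$HOME/.minikube/logs/audit.json" | jq -r '.data.command'    # raw JSON-lines form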
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:54:43
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:54:43.844587  109844 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:43.844861  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.844865  109844 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:43.844868  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.845038  109844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:54:43.845491  109844 out.go:368] Setting JSON to false
	I1002 20:54:43.846405  109844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9425,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:54:43.846500  109844 start.go:140] virtualization: kvm guest
	I1002 20:54:43.848999  109844 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:54:43.850877  109844 notify.go:220] Checking for updates...
	I1002 20:54:43.850921  109844 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:54:43.852793  109844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:43.854834  109844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:54:43.856692  109844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:54:43.858365  109844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:54:43.860403  109844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:43.863103  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:43.863204  109844 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:54:43.889469  109844 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:54:43.889551  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:43.945234  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.934776618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:43.945360  109844 docker.go:318] overlay module found
	I1002 20:54:43.947426  109844 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:43.949164  109844 start.go:304] selected driver: docker
	I1002 20:54:43.949174  109844 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:43.949277  109844 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:43.949355  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:44.006056  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.996347889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:44.006730  109844 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:54:44.006766  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:44.006828  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:44.006872  109844 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:44.008980  109844 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:54:44.010355  109844 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:54:44.011760  109844 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:54:44.012903  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:44.012938  109844 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:54:44.012951  109844 cache.go:58] Caching tarball of preloaded images
	I1002 20:54:44.012993  109844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:54:44.013033  109844 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:54:44.013038  109844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:54:44.013135  109844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:54:44.033578  109844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:54:44.033589  109844 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:54:44.033606  109844 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:54:44.033634  109844 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:54:44.033690  109844 start.go:364] duration metric: took 42.12µs to acquireMachinesLock for "functional-012915"
	I1002 20:54:44.033704  109844 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:54:44.033708  109844 fix.go:54] fixHost starting: 
	I1002 20:54:44.033949  109844 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:54:44.051193  109844 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:54:44.051212  109844 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:54:44.053363  109844 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:54:44.053388  109844 machine.go:93] provisionDockerMachine start ...
	I1002 20:54:44.053449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.071022  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.071263  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.071270  109844 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:54:44.215777  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.215796  109844 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:54:44.215846  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.233786  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.234003  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.234012  109844 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:54:44.386648  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.386732  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.405002  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.405287  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.405300  109844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:54:44.550595  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:54:44.550613  109844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:54:44.550630  109844 ubuntu.go:190] setting up certificates
	I1002 20:54:44.550637  109844 provision.go:84] configureAuth start
	I1002 20:54:44.550684  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:44.568931  109844 provision.go:143] copyHostCerts
	I1002 20:54:44.568985  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:54:44.569001  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:54:44.569078  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:54:44.569204  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:54:44.569210  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:54:44.569250  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:54:44.569359  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:54:44.569365  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:54:44.569398  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:54:44.569559  109844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:54:44.708488  109844 provision.go:177] copyRemoteCerts
	I1002 20:54:44.708542  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:54:44.708581  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.726299  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:44.828230  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:54:44.845801  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:54:44.864647  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:54:44.886083  109844 provision.go:87] duration metric: took 335.431145ms to configureAuth
	I1002 20:54:44.886105  109844 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:54:44.886322  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:44.886449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.904652  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.904873  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.904882  109844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:54:45.179966  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:54:45.179982  109844 machine.go:96] duration metric: took 1.12658745s to provisionDockerMachine
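	The provisioning step just above writes a one-line environment drop-in and restarts cri-o so the service CIDR is treated as an insecure registry. A hedged spot-check from the host; `docker exec` works here because the "node" is itself a docker container (container name taken from this log):
	
	    $ docker exec functional-012915 cat /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    $ docker exec functional-012915 systemctl is-active crio
	    active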
	I1002 20:54:45.179993  109844 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:54:45.180006  109844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:54:45.180072  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:54:45.180106  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.198206  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.300487  109844 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:54:45.304200  109844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:54:45.304220  109844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:54:45.304236  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:54:45.304298  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:54:45.304376  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:54:45.304448  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:54:45.304489  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:54:45.312033  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:45.329488  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:54:45.347685  109844 start.go:296] duration metric: took 167.67425ms for postStartSetup
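	The two scp lines above follow minikube's file-sync convention: anything placed under the profile's .minikube/files/<path> on the host is mirrored to /<path> inside the node on every start. A hedged example with a made-up file name:
	
	    $ MK=/home/jenkins/minikube-integration/21682-80114/.minikube
	    $ mkdir -p "$MK/files/etc/myapp"
	    $ echo 'key=value' > "$MK/files/etc/myapp/app.conf"   # synced to /etc/myapp/app.conf in the node on next start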
	I1002 20:54:45.347776  109844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:54:45.347829  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.365819  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.465348  109844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:54:45.470042  109844 fix.go:56] duration metric: took 1.436324828s for fixHost
	I1002 20:54:45.470060  109844 start.go:83] releasing machines lock for "functional-012915", held for 1.436363927s
	I1002 20:54:45.470140  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:45.487689  109844 ssh_runner.go:195] Run: cat /version.json
	I1002 20:54:45.487729  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.487802  109844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:54:45.487851  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.505570  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.507416  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.673212  109844 ssh_runner.go:195] Run: systemctl --version
	I1002 20:54:45.680090  109844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:54:45.716457  109844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:54:45.721126  109844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:54:45.721199  109844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:54:45.729223  109844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:54:45.729241  109844 start.go:495] detecting cgroup driver to use...
	I1002 20:54:45.729276  109844 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:54:45.729332  109844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:54:45.744221  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:54:45.757221  109844 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:54:45.757262  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:54:45.772166  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:54:45.785276  109844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:54:45.871303  109844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:54:45.959396  109844 docker.go:234] disabling docker service ...
	I1002 20:54:45.959460  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:54:45.974048  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:54:45.986376  109844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:54:46.071815  109844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:54:46.159773  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:54:46.172020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
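	Writing /etc/crictl.yaml pins crictl to the cri-o socket, which is why the bare `crictl` invocations later in this log need no --runtime-endpoint flag. A quick hedged check:
	
	    $ docker exec functional-012915 cat /etc/crictl.yaml
	    runtime-endpoint: unix:///var/run/crio/crio.sock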
	I1002 20:54:46.186483  109844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:54:46.186540  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.195504  109844 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:54:46.195591  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.205033  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.213732  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.222589  109844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:54:46.230603  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.239758  109844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.248194  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.256956  109844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:54:46.264263  109844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:54:46.271577  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.354483  109844 ssh_runner.go:195] Run: sudo systemctl restart crio
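	Net effect of the sed series above on /etc/crio/crio.conf.d/02-crio.conf, checkable once crio is back up (a hedged reconstruction; the resulting file itself is not captured in this log):
	
	    $ docker exec functional-012915 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	      "net.ipv4.ip_unprivileged_port_start=0",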
	I1002 20:54:46.464818  109844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:54:46.464871  109844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:54:46.468860  109844 start.go:563] Will wait 60s for crictl version
	I1002 20:54:46.468905  109844 ssh_runner.go:195] Run: which crictl
	I1002 20:54:46.472439  109844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:54:46.496177  109844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:54:46.496237  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.524348  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.554038  109844 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:54:46.555482  109844 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:54:46.572825  109844 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:54:46.579140  109844 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:54:46.580455  109844 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:54:46.580599  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:46.580680  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.615204  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.615216  109844 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:54:46.615259  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.641403  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.641428  109844 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:54:46.641435  109844 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:54:46.641523  109844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:54:46.641593  109844 ssh_runner.go:195] Run: crio config
	I1002 20:54:46.685535  109844 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:54:46.685558  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:46.685570  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:46.685580  109844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:54:46.685603  109844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:54:46.685708  109844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
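	Before the init phases run, the generated config above can be sanity-checked against the pinned kubeadm binary. A hedged sketch (`kubeadm config validate` exists in v1.26+; the .new suffix matches the staging path written a few lines below):
	
	    $ docker exec functional-012915 sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new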
	
	I1002 20:54:46.685786  109844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:54:46.694168  109844 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:54:46.694220  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:54:46.701920  109844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:54:46.714502  109844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:54:46.726979  109844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:54:46.739184  109844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:54:46.742937  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.828267  109844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:54:46.841290  109844 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:54:46.841302  109844 certs.go:195] generating shared ca certs ...
	I1002 20:54:46.841315  109844 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:54:46.841439  109844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:54:46.841480  109844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:54:46.841486  109844 certs.go:257] generating profile certs ...
	I1002 20:54:46.841556  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:54:46.841595  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:54:46.841625  109844 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:54:46.841728  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:54:46.841789  109844 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:54:46.841795  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:54:46.841816  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:54:46.841847  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:54:46.841870  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:54:46.841921  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:46.842546  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:54:46.860833  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:54:46.878996  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:54:46.897504  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:54:46.914816  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:54:46.931903  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:54:46.948901  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:54:46.965859  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:54:46.982982  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:54:47.000600  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:54:47.018108  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:54:47.035448  109844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:54:47.047886  109844 ssh_runner.go:195] Run: openssl version
	I1002 20:54:47.053789  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:54:47.062187  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066098  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066148  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.100024  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:54:47.108632  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:54:47.118249  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122176  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122226  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.156807  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:54:47.165260  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:54:47.173954  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177825  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177879  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.212057  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
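	The 8-hex-digit link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: each `openssl x509 -hash` call computes the name under which libssl will look the CA up in /etc/ssl/certs. A hedged reproduction for the minikube CA:
	
	    $ docker exec functional-012915 openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ docker exec functional-012915 ls -l /etc/ssl/certs/b5213941.0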
	I1002 20:54:47.220716  109844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:54:47.224961  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:54:47.259305  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:54:47.293091  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:54:47.327486  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:54:47.361854  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:54:47.395871  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
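	Each `-checkend 86400` above asks openssl whether the certificate expires within 86400 seconds (24 h); a non-zero exit here would trigger regeneration instead of reuse. A hedged one-liner:
	
	    $ docker exec functional-012915 sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo 'valid for at least 24h'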
	I1002 20:54:47.429860  109844 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:47.429950  109844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:54:47.429996  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.458514  109844 cri.go:89] found id: ""
	I1002 20:54:47.458565  109844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:54:47.466572  109844 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:54:47.466595  109844 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:54:47.466642  109844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:54:47.473967  109844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.474578  109844 kubeconfig.go:125] found "functional-012915" server: "https://192.168.49.2:8441"
	I1002 20:54:47.476054  109844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:54:47.483705  109844 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:40:16.332502550 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:54:46.736875917 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
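Drift detection leans on diff's exit status: 0 means the staged kubeadm.yaml.new matches the active kubeadm.yaml, 1 means they differ (here, the admission-plugins value changed to NamespaceAutoProvision), and 2 or higher means diff itself failed. A minimal sketch of that check (hypothetical helper, not kubeadm.go itself):

```go
// Hypothetical sketch: detect kubeadm config drift via `diff -u` exit codes,
// as in the log lines above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeadmConfigDrift returns the unified diff when the two configs differ.
func kubeadmConfigDrift(oldPath, newPath string) (string, bool, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return "", false, nil // exit 0: no drift
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return string(out), true, nil // exit 1: files differ
	}
	return "", false, err // exit 2+: diff itself failed
}

func main() {
	diff, drifted, err := kubeadmConfigDrift(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("config drift detected, will reconfigure:\n" + diff)
	}
}
```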
	I1002 20:54:47.483713  109844 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:54:47.483724  109844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:54:47.483782  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.509815  109844 cri.go:89] found id: ""
	I1002 20:54:47.509892  109844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:54:47.553124  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:54:47.561262  109844 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:54:47.561322  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:54:47.569534  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:54:47.577441  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.577491  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:54:47.585032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.592533  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.592581  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.600040  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:54:47.607328  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.607365  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
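The grep-then-rm sequence above keeps admin.conf, which already contains https://control-plane.minikube.internal:8441, and deletes the three kubeconfigs where grep exits non-zero, so the kubeadm kubeconfig phase below regenerates them with the right server URL. A minimal sketch of that loop (hypothetical helper):

```go
// Hypothetical sketch: for each kubeconfig, keep it if grep finds the expected
// control-plane endpoint, otherwise remove it for regeneration, mirroring the
// log lines above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureEndpoint(endpoint string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the pattern is absent from the file.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			exec.Command("sudo", "rm", "-f", f).Run() // regenerated by kubeadm below
		}
	}
}

func main() {
	ensureEndpoint("https://control-plane.minikube.internal:8441", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```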
	I1002 20:54:47.614787  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:54:47.622401  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:47.663022  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.396196  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.576311  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.625411  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
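The five kubeadm invocations above rebuild the control plane piecewise: certificates, kubeconfigs, kubelet start, static-pod manifests, and local etcd, each run against the refreshed /var/tmp/minikube/kubeadm.yaml with the pinned v1.34.1 binaries directory prepended to PATH. A minimal sketch of that sequence, mirroring the logged bash -c form (hypothetical wrapper):

```go
// Hypothetical sketch: run the kubeadm init phases from the log above in
// order, stopping at the first failure.
package main

import (
	"fmt"
	"os/exec"
)

func runPhases(cfg string) error {
	phases := []string{
		"init phase certs all",
		"init phase kubeconfig all",
		"init phase kubelet-start",
		"init phase control-plane all",
		"init phase etcd local",
	}
	for _, p := range phases {
		// Mirror the logged invocation: bash -c with the binaries dir on PATH.
		script := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm %s --config %s`, p, cfg)
		if out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %s: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
```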
	I1002 20:54:48.679287  109844 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:54:48.679369  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.179574  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same pgrep poll repeated 117 more times at ~0.5s intervals, 20:54:49.679973 through 20:55:47.679540, never finding a kube-apiserver process ...]
	I1002 20:55:48.180382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
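The wait that just expired is a plain poll: pgrep is retried roughly every 0.5s and exits non-zero until a process matches the pattern, and the timestamps above span about one minute without a match. A minimal sketch of such a wait loop (hypothetical helper):

```go
// Hypothetical sketch: poll for the kube-apiserver process the way the
// api_server.go wait above does, with a fixed cadence and deadline.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the pattern.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(time.Minute); err != nil {
		fmt.Println(err) // in this run the wait expired and log gathering began
	}
}
```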
	I1002 20:55:48.679912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:48.679971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:48.706989  109844 cri.go:89] found id: ""
	I1002 20:55:48.707014  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.707020  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:48.707025  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:48.707071  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:48.733283  109844 cri.go:89] found id: ""
	I1002 20:55:48.733299  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.733306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:48.733311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:48.733361  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:48.761228  109844 cri.go:89] found id: ""
	I1002 20:55:48.761245  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.761250  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:48.761256  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:48.761313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:48.788501  109844 cri.go:89] found id: ""
	I1002 20:55:48.788516  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.788522  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:48.788527  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:48.788579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:48.814616  109844 cri.go:89] found id: ""
	I1002 20:55:48.814636  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.814646  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:48.814651  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:48.814703  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:48.841518  109844 cri.go:89] found id: ""
	I1002 20:55:48.841538  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.841548  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:48.841555  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:48.841624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:48.869254  109844 cri.go:89] found id: ""
	I1002 20:55:48.869278  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.869288  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:48.869311  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:48.869335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:48.883919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:48.883937  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:48.941687  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:48.941698  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:48.941710  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:49.007787  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:49.007810  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:49.038133  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:49.038157  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
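Once the wait gives up, minikube collects diagnostics; each "Gathering logs for ..." line above corresponds to one shell command on the node, and the order shuffles between cycles (dmesg first here, container status first in the next pass), which is consistent with iterating a map. A minimal sketch of the same pass, with the command strings taken verbatim from the log (hypothetical helper):

```go
// Hypothetical sketch: run the diagnostic commands from the "Gathering logs"
// lines above; the failed "describe nodes" case corresponds to the apiserver
// being down on :8441.
package main

import (
	"fmt"
	"os/exec"
)

var logSources = map[string]string{
	"kubelet":          "sudo journalctl -u kubelet -n 400",
	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	"describe nodes":   "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	"CRI-O":            "sudo journalctl -u crio -n 400",
	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
}

func gather() {
	for name, cmd := range logSources { // map iteration order is randomized
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed gathering %s: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s\n", name, out)
	}
}

func main() { gather() }
```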
	I1002 20:55:51.609461  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:51.620229  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:51.620296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:51.647003  109844 cri.go:89] found id: ""
	I1002 20:55:51.647022  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.647028  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:51.647033  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:51.647087  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:51.673376  109844 cri.go:89] found id: ""
	I1002 20:55:51.673394  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.673402  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:51.673408  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:51.673467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:51.700685  109844 cri.go:89] found id: ""
	I1002 20:55:51.700701  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.700719  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:51.700724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:51.700792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:51.726660  109844 cri.go:89] found id: ""
	I1002 20:55:51.726677  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.726684  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:51.726689  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:51.726762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:51.753630  109844 cri.go:89] found id: ""
	I1002 20:55:51.753646  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.753652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:51.753657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:51.753750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:51.779127  109844 cri.go:89] found id: ""
	I1002 20:55:51.779146  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.779155  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:51.779161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:51.779235  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:51.805960  109844 cri.go:89] found id: ""
	I1002 20:55:51.805979  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.805988  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:51.805997  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:51.806006  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:51.835916  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:51.835939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:51.905127  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:51.905159  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:51.920189  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:51.920209  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:51.976010  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:51.976023  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:51.976035  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	[... four more identical check-and-gather cycles at 20:55:54, 20:55:57, 20:56:00, and 20:56:03: pgrep finds no kube-apiserver process, crictl lists no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, and `kubectl describe nodes` fails each time with "connection refused" on localhost:8441 ...]
	I1002 20:56:06.266853  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:06.278118  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:06.278167  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:06.304229  109844 cri.go:89] found id: ""
	I1002 20:56:06.304246  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.304252  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:06.304258  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:06.304314  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:06.331492  109844 cri.go:89] found id: ""
	I1002 20:56:06.331510  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.331517  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:06.331522  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:06.331574  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:06.357300  109844 cri.go:89] found id: ""
	I1002 20:56:06.357319  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.357328  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:06.357333  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:06.357381  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:06.385072  109844 cri.go:89] found id: ""
	I1002 20:56:06.385092  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.385101  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:06.385107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:06.385170  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:06.412479  109844 cri.go:89] found id: ""
	I1002 20:56:06.412499  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.412509  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:06.412516  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:06.412571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:06.439019  109844 cri.go:89] found id: ""
	I1002 20:56:06.439035  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.439042  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:06.439049  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:06.439105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:06.466228  109844 cri.go:89] found id: ""
	I1002 20:56:06.466244  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.466250  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:06.466257  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:06.466268  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:06.530972  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:06.530997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:06.546016  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:06.546039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:06.604192  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:06.597141    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.597599    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.599321    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.600026    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.601244    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:06.604215  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:06.604226  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:06.668313  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:06.668341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
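
Each iteration also enumerates the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet) via crictl and finds none in any state. A sketch of that probe pattern in Go, with a hypothetical helper name; the real logic lives in minikube's cri package:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listContainerIDs asks crictl for all containers (any state) whose name
    // matches the given component, mirroring the
    // "sudo crictl ps -a --quiet --name=<component>" calls in the log above.
    func listContainerIDs(component string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
    	for _, c := range components {
    		ids, err := listContainerIDs(c)
    		if err != nil || len(ids) == 0 {
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %v\n", c, ids)
    	}
    }
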
	I1002 20:56:09.199470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:09.210902  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:09.210947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:09.237464  109844 cri.go:89] found id: ""
	I1002 20:56:09.237481  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.237488  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:09.237503  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:09.237549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:09.264849  109844 cri.go:89] found id: ""
	I1002 20:56:09.264868  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.264876  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:09.264884  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:09.264944  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:09.291066  109844 cri.go:89] found id: ""
	I1002 20:56:09.291083  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.291088  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:09.291094  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:09.291141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:09.316972  109844 cri.go:89] found id: ""
	I1002 20:56:09.316991  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.317001  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:09.317008  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:09.317066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:09.342462  109844 cri.go:89] found id: ""
	I1002 20:56:09.342479  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.342488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:09.342494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:09.342560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:09.369344  109844 cri.go:89] found id: ""
	I1002 20:56:09.369361  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.369370  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:09.369377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:09.369431  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:09.396279  109844 cri.go:89] found id: ""
	I1002 20:56:09.396295  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.396301  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:09.396309  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:09.396325  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:09.462471  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:09.462495  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:09.477360  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:09.477379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:09.533977  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:09.526956    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.527598    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529217    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529656    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.531136    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:09.533991  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:09.534001  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:09.597829  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:09.597856  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:12.129375  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:12.140711  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:12.140778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:12.167268  109844 cri.go:89] found id: ""
	I1002 20:56:12.167287  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.167295  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:12.167301  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:12.167351  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:12.193605  109844 cri.go:89] found id: ""
	I1002 20:56:12.193620  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.193625  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:12.193630  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:12.193674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:12.220258  109844 cri.go:89] found id: ""
	I1002 20:56:12.220272  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.220279  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:12.220284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:12.220342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:12.246824  109844 cri.go:89] found id: ""
	I1002 20:56:12.246839  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.246845  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:12.246849  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:12.246897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:12.273611  109844 cri.go:89] found id: ""
	I1002 20:56:12.273631  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.273639  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:12.273647  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:12.273708  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:12.300838  109844 cri.go:89] found id: ""
	I1002 20:56:12.300856  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.300862  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:12.300868  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:12.300916  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:12.328414  109844 cri.go:89] found id: ""
	I1002 20:56:12.328429  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.328435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:12.328442  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:12.328453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:12.397603  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:12.397628  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:12.412076  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:12.412093  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:12.469369  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:12.462192    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.462709    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464313    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464791    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.466331    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:12.469384  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:12.469399  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:12.530104  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:12.530130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:15.060450  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:15.071089  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:15.071138  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:15.097730  109844 cri.go:89] found id: ""
	I1002 20:56:15.097766  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.097774  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:15.097783  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:15.097832  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:15.123349  109844 cri.go:89] found id: ""
	I1002 20:56:15.123366  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.123376  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:15.123382  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:15.123445  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:15.149644  109844 cri.go:89] found id: ""
	I1002 20:56:15.149659  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.149665  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:15.149670  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:15.149717  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:15.175442  109844 cri.go:89] found id: ""
	I1002 20:56:15.175464  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.175473  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:15.175480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:15.175534  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:15.200859  109844 cri.go:89] found id: ""
	I1002 20:56:15.200875  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.200881  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:15.200886  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:15.200931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:15.226770  109844 cri.go:89] found id: ""
	I1002 20:56:15.226786  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.226792  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:15.226797  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:15.226857  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:15.252444  109844 cri.go:89] found id: ""
	I1002 20:56:15.252462  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.252472  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:15.252480  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:15.252493  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:15.281148  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:15.281166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:15.350382  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:15.350406  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:15.365144  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:15.365163  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:15.421764  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:15.414607    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.415162    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.416781    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.417290    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.418840    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:15.421789  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:15.421802  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:17.982382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:17.992951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:17.992999  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:18.018834  109844 cri.go:89] found id: ""
	I1002 20:56:18.018853  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.018862  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:18.018869  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:18.018923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:18.045169  109844 cri.go:89] found id: ""
	I1002 20:56:18.045186  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.045192  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:18.045196  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:18.045245  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:18.071187  109844 cri.go:89] found id: ""
	I1002 20:56:18.071202  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.071209  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:18.071213  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:18.071263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:18.099002  109844 cri.go:89] found id: ""
	I1002 20:56:18.099021  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.099031  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:18.099037  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:18.099086  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:18.124458  109844 cri.go:89] found id: ""
	I1002 20:56:18.124474  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.124481  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:18.124486  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:18.124532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:18.151052  109844 cri.go:89] found id: ""
	I1002 20:56:18.151070  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.151078  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:18.151086  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:18.151147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:18.177070  109844 cri.go:89] found id: ""
	I1002 20:56:18.177088  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.177097  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:18.177106  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:18.177120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:18.245531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:18.245551  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:18.259536  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:18.259555  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:18.315828  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:18.309110    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.309608    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311154    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311572    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.313080    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:18.315838  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:18.315849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:18.378894  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:18.378917  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:20.910289  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:20.921508  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:20.921565  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:20.949001  109844 cri.go:89] found id: ""
	I1002 20:56:20.949015  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.949022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:20.949027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:20.949073  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:20.975236  109844 cri.go:89] found id: ""
	I1002 20:56:20.975253  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.975259  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:20.975264  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:20.975310  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:21.002161  109844 cri.go:89] found id: ""
	I1002 20:56:21.002176  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.002183  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:21.002188  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:21.002236  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:21.029183  109844 cri.go:89] found id: ""
	I1002 20:56:21.029203  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.029211  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:21.029218  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:21.029291  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:21.056171  109844 cri.go:89] found id: ""
	I1002 20:56:21.056187  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.056193  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:21.056198  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:21.056248  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:21.083782  109844 cri.go:89] found id: ""
	I1002 20:56:21.083801  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.083810  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:21.083817  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:21.083873  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:21.110480  109844 cri.go:89] found id: ""
	I1002 20:56:21.110496  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.110503  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:21.110512  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:21.110526  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:21.178200  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:21.178224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:21.192348  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:21.192367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:21.248832  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:21.241470    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.242149    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.243832    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.244309    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.245873    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:21.248843  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:21.248866  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:21.313859  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:21.313939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:23.844485  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:23.855704  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:23.855785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:23.881987  109844 cri.go:89] found id: ""
	I1002 20:56:23.882003  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.882009  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:23.882014  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:23.882058  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:23.908092  109844 cri.go:89] found id: ""
	I1002 20:56:23.908109  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.908115  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:23.908121  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:23.908175  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:23.933489  109844 cri.go:89] found id: ""
	I1002 20:56:23.933503  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.933509  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:23.933514  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:23.933560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:23.958962  109844 cri.go:89] found id: ""
	I1002 20:56:23.958978  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.958985  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:23.958991  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:23.959039  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:23.985206  109844 cri.go:89] found id: ""
	I1002 20:56:23.985222  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.985231  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:23.985237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:23.985298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:24.011436  109844 cri.go:89] found id: ""
	I1002 20:56:24.011453  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.011460  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:24.011465  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:24.011512  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:24.036401  109844 cri.go:89] found id: ""
	I1002 20:56:24.036417  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.036423  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:24.036431  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:24.036447  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:24.050446  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:24.050465  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:24.105883  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:24.099062    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.099587    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101050    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101530    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.103091    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:24.105896  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:24.105906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:24.165660  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:24.165683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:24.194659  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:24.194677  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:26.765857  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:26.776723  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:26.776795  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:26.803878  109844 cri.go:89] found id: ""
	I1002 20:56:26.803894  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.803901  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:26.803906  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:26.803960  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:26.828926  109844 cri.go:89] found id: ""
	I1002 20:56:26.828944  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.828950  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:26.828955  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:26.829002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:26.854812  109844 cri.go:89] found id: ""
	I1002 20:56:26.854828  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.854834  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:26.854840  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:26.854887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:26.881665  109844 cri.go:89] found id: ""
	I1002 20:56:26.881682  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.881688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:26.881693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:26.881763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:26.909265  109844 cri.go:89] found id: ""
	I1002 20:56:26.909284  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.909294  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:26.909301  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:26.909355  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:26.935117  109844 cri.go:89] found id: ""
	I1002 20:56:26.935133  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.935139  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:26.935144  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:26.935200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:26.961377  109844 cri.go:89] found id: ""
	I1002 20:56:26.961392  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.961399  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:26.961406  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:26.961417  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:26.989187  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:26.989204  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:27.056354  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:27.056379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:27.070926  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:27.070944  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:27.127442  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:27.119650    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.120189    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.122490    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.123013    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.124580    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:27.127456  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:27.127473  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
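
The timestamps show this whole sequence repeating on a roughly three-second cadence (20:56:03, :06, :09, ... :30): a fixed-interval retry that keeps collecting logs until an overall deadline expires. A sketch of that retry shape, with illustrative interval and timeout values rather than minikube's actual constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning reports whether a kube-apiserver process for this
    // profile exists, mirroring the "sudo pgrep -xnf kube-apiserver.*minikube.*"
    // probe in the log above; pgrep exits non-zero when nothing matches.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func waitForAPIServer(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			return nil
    		}
    		time.Sleep(interval) // on each miss, the log-gathering pass above runs
    	}
    	return errors.New("timed out waiting for kube-apiserver")
    }

    func main() {
    	if err := waitForAPIServer(3*time.Second, 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
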
	I1002 20:56:29.687547  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:29.698733  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:29.698810  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:29.724706  109844 cri.go:89] found id: ""
	I1002 20:56:29.724721  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.724727  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:29.724732  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:29.724794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:29.752274  109844 cri.go:89] found id: ""
	I1002 20:56:29.752291  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.752297  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:29.752308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:29.752369  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:29.778792  109844 cri.go:89] found id: ""
	I1002 20:56:29.778807  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.778813  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:29.778818  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:29.778867  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:29.804447  109844 cri.go:89] found id: ""
	I1002 20:56:29.804468  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.804485  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:29.804490  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:29.804540  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:29.830280  109844 cri.go:89] found id: ""
	I1002 20:56:29.830301  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.830310  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:29.830316  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:29.830375  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:29.855193  109844 cri.go:89] found id: ""
	I1002 20:56:29.855209  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.855215  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:29.855220  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:29.855270  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:29.881092  109844 cri.go:89] found id: ""
	I1002 20:56:29.881107  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.881114  109844 logs.go:284] No container was found matching "kindnet"
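Each sweep above queries CRI-O for containers by name and gets back an empty ID list for every control-plane component, meaning the runtime never created them. The query can be reproduced and broadened by hand; these are standard crictl invocations, not taken from the log:

    # Every container, running or exited, with full details
    sudo crictl ps -a
    # Pod sandboxes; empty output means the kubelet never asked CRI-O to create pods
    sudo crictl pods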
	I1002 20:56:29.881122  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:29.881132  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:29.948531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:29.948565  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
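As a gloss, the short dmesg flags used above should expand to this long-option form on util-linux dmesg (equivalence assumed, shown for readability only):

    sudo dmesg --nopager --human --color=never \
      --level=warn,err,crit,alert,emerg | tail -n 400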
	I1002 20:56:29.962996  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:29.963015  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:30.019733  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:30.012437    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.013106    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.014710    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.015163    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.016849    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:30.019769  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:30.019784  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:30.080302  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:30.080332  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:32.612620  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:32.623619  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:32.623669  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:32.649868  109844 cri.go:89] found id: ""
	I1002 20:56:32.649884  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.649890  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:32.649895  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:32.649947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:32.676993  109844 cri.go:89] found id: ""
	I1002 20:56:32.677011  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.677020  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:32.677026  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:32.677084  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:32.703005  109844 cri.go:89] found id: ""
	I1002 20:56:32.703026  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.703036  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:32.703042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:32.703105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:32.728641  109844 cri.go:89] found id: ""
	I1002 20:56:32.728657  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.728663  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:32.728668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:32.728716  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:32.754904  109844 cri.go:89] found id: ""
	I1002 20:56:32.754922  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.754931  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:32.754938  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:32.754996  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:32.780607  109844 cri.go:89] found id: ""
	I1002 20:56:32.780623  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.780632  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:32.780638  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:32.780700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:32.805534  109844 cri.go:89] found id: ""
	I1002 20:56:32.805549  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.805555  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:32.805564  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:32.805575  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:32.871168  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:32.871190  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:32.885484  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:32.885503  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:32.942338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:32.935227    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.935814    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937470    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937975    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.939512    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:32.942348  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:32.942361  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:33.006822  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:33.006849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
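The backtick expression above is a portability fallback: resolve crictl's full path if installed, otherwise keep the bare name so the failure message is self-describing, and if the crictl invocation fails at all, fall back to Docker. Written out without command substitution, the same chain reads:

    CRICTL="$(which crictl || echo crictl)"    # absolute path if installed, bare name otherwise
    sudo "$CRICTL" ps -a || sudo docker ps -a  # fall back to Docker when crictl fails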
	I1002 20:56:35.539700  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:35.550793  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:35.550843  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:35.577123  109844 cri.go:89] found id: ""
	I1002 20:56:35.577141  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.577152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:35.577158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:35.577205  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:35.603414  109844 cri.go:89] found id: ""
	I1002 20:56:35.603429  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.603435  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:35.603440  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:35.603487  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:35.630119  109844 cri.go:89] found id: ""
	I1002 20:56:35.630139  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.630151  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:35.630161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:35.630216  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:35.656385  109844 cri.go:89] found id: ""
	I1002 20:56:35.656400  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.656406  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:35.656410  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:35.656461  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:35.683092  109844 cri.go:89] found id: ""
	I1002 20:56:35.683109  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.683117  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:35.683121  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:35.683168  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:35.709629  109844 cri.go:89] found id: ""
	I1002 20:56:35.709644  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.709651  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:35.709657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:35.709713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:35.737006  109844 cri.go:89] found id: ""
	I1002 20:56:35.737025  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.737035  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:35.737043  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:35.737054  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.767533  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:35.767556  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:35.833953  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:35.833980  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:35.848818  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:35.848839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:35.906998  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:35.899806    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.900358    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.901937    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.902434    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.903965    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:35.907011  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:35.907024  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
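The journal pull above collects CRI-O's own view of the failure. If the runtime itself were suspect, its health can be checked directly with standard systemd and crictl commands (not taken from this log):

    sudo systemctl status crio --no-pager
    sudo crictl info    # runtime status and configuration as JSON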
	I1002 20:56:38.471319  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:38.481958  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:38.482010  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:38.507711  109844 cri.go:89] found id: ""
	I1002 20:56:38.507730  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.507751  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:38.507758  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:38.507820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:38.534015  109844 cri.go:89] found id: ""
	I1002 20:56:38.534033  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.534039  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:38.534045  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:38.534096  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:38.561341  109844 cri.go:89] found id: ""
	I1002 20:56:38.561358  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.561367  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:38.561373  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:38.561433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:38.587872  109844 cri.go:89] found id: ""
	I1002 20:56:38.587891  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.587901  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:38.587907  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:38.587973  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:38.612399  109844 cri.go:89] found id: ""
	I1002 20:56:38.612418  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.612427  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:38.612433  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:38.612480  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:38.639104  109844 cri.go:89] found id: ""
	I1002 20:56:38.639120  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.639127  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:38.639132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:38.639190  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:38.667322  109844 cri.go:89] found id: ""
	I1002 20:56:38.667339  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.667345  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:38.667352  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:38.667363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:38.682168  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:38.682187  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:38.740651  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:38.733357    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.733969    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.735590    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.736050    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.737649    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
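These describe-nodes attempts run minikube's provisioned kubectl against the in-guest admin kubeconfig at /var/lib/minikube/kubeconfig, so the failure is independent of the host's kubeconfig. The same probe can be issued by hand inside the guest, using the binary and flag exactly as they appear in the log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide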
	I1002 20:56:38.740663  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:38.740674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:38.805774  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:38.805798  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:38.835944  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:38.835962  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.406460  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:41.417553  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:41.417620  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:41.444684  109844 cri.go:89] found id: ""
	I1002 20:56:41.444698  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.444705  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:41.444710  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:41.444781  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:41.471352  109844 cri.go:89] found id: ""
	I1002 20:56:41.471370  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.471382  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:41.471390  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:41.471442  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:41.498686  109844 cri.go:89] found id: ""
	I1002 20:56:41.498702  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.498709  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:41.498714  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:41.498785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:41.524449  109844 cri.go:89] found id: ""
	I1002 20:56:41.524463  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.524469  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:41.524478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:41.524531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:41.551827  109844 cri.go:89] found id: ""
	I1002 20:56:41.551845  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.551857  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:41.551864  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:41.551913  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:41.577898  109844 cri.go:89] found id: ""
	I1002 20:56:41.577918  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.577927  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:41.577933  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:41.577989  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:41.604237  109844 cri.go:89] found id: ""
	I1002 20:56:41.604254  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.604261  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:41.604270  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:41.604290  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.675907  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:41.675931  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:41.690491  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:41.690509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:41.749157  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:41.742425    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.742947    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.744615    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.745122    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.746195    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:41.749169  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:41.749184  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:41.815715  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:41.815751  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.347532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:44.358694  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:44.358755  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:44.385917  109844 cri.go:89] found id: ""
	I1002 20:56:44.385932  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.385941  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:44.385946  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:44.385992  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:44.412267  109844 cri.go:89] found id: ""
	I1002 20:56:44.412283  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.412289  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:44.412293  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:44.412344  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:44.439227  109844 cri.go:89] found id: ""
	I1002 20:56:44.439242  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.439249  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:44.439253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:44.439298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:44.465395  109844 cri.go:89] found id: ""
	I1002 20:56:44.465411  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.465418  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:44.465423  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:44.465473  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:44.491435  109844 cri.go:89] found id: ""
	I1002 20:56:44.491452  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.491457  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:44.491462  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:44.491508  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:44.517875  109844 cri.go:89] found id: ""
	I1002 20:56:44.517892  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.517899  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:44.517904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:44.517956  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:44.544412  109844 cri.go:89] found id: ""
	I1002 20:56:44.544428  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.544435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:44.544443  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:44.544454  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:44.558619  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:44.558637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:44.615090  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:44.608024    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.608566    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610178    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610634    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.612155    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:44.615103  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:44.615115  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:44.675486  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:44.675509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.704835  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:44.704853  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
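Because kube-apiserver runs as a static pod managed directly by the kubelet, the kubelet journal gathered here is usually where a missing apiserver explains itself. Two hedged follow-ups, assuming the standard kubeadm layout that minikube provisions:

    # Static pod manifests the kubelet is expected to launch
    sudo ls /etc/kubernetes/manifests
    # Only warnings and worse from the kubelet
    sudo journalctl -u kubelet -p warning -n 100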
	I1002 20:56:47.280286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:47.291478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:47.291529  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:47.318560  109844 cri.go:89] found id: ""
	I1002 20:56:47.318581  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.318586  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:47.318594  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:47.318648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:47.344455  109844 cri.go:89] found id: ""
	I1002 20:56:47.344471  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.344477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:47.344482  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:47.344527  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:47.370437  109844 cri.go:89] found id: ""
	I1002 20:56:47.370452  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.370458  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:47.370464  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:47.370532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:47.396657  109844 cri.go:89] found id: ""
	I1002 20:56:47.396672  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.396678  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:47.396682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:47.396751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:47.422143  109844 cri.go:89] found id: ""
	I1002 20:56:47.422166  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.422172  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:47.422178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:47.422230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:47.447815  109844 cri.go:89] found id: ""
	I1002 20:56:47.447835  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.447844  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:47.447851  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:47.447910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:47.473476  109844 cri.go:89] found id: ""
	I1002 20:56:47.473491  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.473498  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:47.473514  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:47.473528  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:47.487700  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:47.487722  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:47.544344  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:47.537160    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.537816    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539394    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539878    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.541420    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:47.544360  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:47.544370  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:47.605987  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:47.606010  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:47.634796  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:47.634815  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.205345  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:50.216795  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:50.216856  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:50.242490  109844 cri.go:89] found id: ""
	I1002 20:56:50.242507  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.242516  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:50.242523  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:50.242599  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:50.269384  109844 cri.go:89] found id: ""
	I1002 20:56:50.269399  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.269405  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:50.269410  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:50.269455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:50.294810  109844 cri.go:89] found id: ""
	I1002 20:56:50.294830  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.294839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:50.294847  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:50.294900  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:50.321301  109844 cri.go:89] found id: ""
	I1002 20:56:50.321330  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.321339  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:50.321345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:50.321396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:50.348435  109844 cri.go:89] found id: ""
	I1002 20:56:50.348454  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.348463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:50.348470  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:50.348521  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:50.375520  109844 cri.go:89] found id: ""
	I1002 20:56:50.375537  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.375544  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:50.375550  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:50.375612  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:50.401919  109844 cri.go:89] found id: ""
	I1002 20:56:50.401935  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.401941  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:50.401949  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:50.401960  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.474853  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:50.474878  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:50.489483  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:50.489502  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:50.546358  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	 output: 
	** stderr ** 
	E1002 20:56:50.539620    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.540253    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.541729    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.542224    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.543673    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:50.546371  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:50.546387  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:50.612342  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:50.612365  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
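Each iteration of this roughly three-second loop opens with the pgrep probe seen throughout: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n keeps only the newest match. Standing alone:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # No output and exit status 1: no apiserver process exists, so the loop retries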
	I1002 20:56:53.143229  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:53.154347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:53.154399  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:53.179697  109844 cri.go:89] found id: ""
	I1002 20:56:53.179714  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.179722  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:53.179727  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:53.179796  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:53.206078  109844 cri.go:89] found id: ""
	I1002 20:56:53.206094  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.206102  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:53.206107  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:53.206161  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:53.232905  109844 cri.go:89] found id: ""
	I1002 20:56:53.232920  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.232929  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:53.232935  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:53.232990  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:53.258881  109844 cri.go:89] found id: ""
	I1002 20:56:53.258897  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.258903  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:53.258908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:53.259002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:53.286005  109844 cri.go:89] found id: ""
	I1002 20:56:53.286020  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.286026  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:53.286031  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:53.286077  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:53.311544  109844 cri.go:89] found id: ""
	I1002 20:56:53.311562  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.311572  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:53.311579  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:53.311642  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:53.338344  109844 cri.go:89] found id: ""
	I1002 20:56:53.338360  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.338366  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:53.338375  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:53.338391  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:53.394654  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
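Every `kubectl describe nodes` attempt above fails the same way: nothing is listening on localhost:8441, so each request dies with `connect: connection refused` before API discovery can even begin. A hedged Go sketch of a pre-check that reproduces the same symptom by dialing the apiserver port directly (the address mirrors the log; the timeout is an assumed value):

// Sketch only: probe the apiserver port that kubectl is failing to reach.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no kube-apiserver running this prints the same
		// "connect: connection refused" seen in the stderr above.
		fmt.Printf("apiserver unreachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("port 8441 is open; kubectl should get past the dial step")
}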
	I1002 20:56:53.394666  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:53.394676  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:53.457101  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:53.457125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.487445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:53.487464  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:53.560767  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:53.560788  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
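With no containers to inspect, the tooling falls back to host-level sources: kubelet and CRI-O via journalctl, kernel messages via dmesg, and container status via crictl. A small Go sketch of this gather step, running the same shell one-liners shown in the log (locally rather than over SSH, which is this sketch's assumption):

// Hedged sketch of the "Gathering logs for ..." fan-out, using the same shell
// one-liners as the log. Assumption: commands run locally, not over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func gather(label, script string) {
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Printf("== %s (err: %v) ==\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}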
	I1002 20:56:56.077698  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:56.088607  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:56.088653  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:56.115831  109844 cri.go:89] found id: ""
	I1002 20:56:56.115851  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.115860  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:56.115873  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:56.115930  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:56.143933  109844 cri.go:89] found id: ""
	I1002 20:56:56.143951  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.143960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:56.143966  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:56.144013  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:56.170959  109844 cri.go:89] found id: ""
	I1002 20:56:56.170976  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.170983  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:56.170987  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:56.171041  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:56.198476  109844 cri.go:89] found id: ""
	I1002 20:56:56.198493  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.198502  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:56.198507  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:56.198553  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:56.225118  109844 cri.go:89] found id: ""
	I1002 20:56:56.225136  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.225144  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:56.225151  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:56.225203  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:56.250695  109844 cri.go:89] found id: ""
	I1002 20:56:56.250712  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.250719  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:56.250724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:56.250798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:56.277912  109844 cri.go:89] found id: ""
	I1002 20:56:56.277927  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.277933  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:56.277939  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:56.277949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:56.348703  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:56.348726  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:56.363669  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:56.363691  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:56.421487  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:56.414561    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.415193    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.416833    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.417344    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.418421    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:56.421501  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:56.421512  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:56.486234  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:56.486258  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.016061  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:59.027120  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:59.027174  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:59.055077  109844 cri.go:89] found id: ""
	I1002 20:56:59.055094  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.055100  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:59.055105  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:59.055154  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:59.080243  109844 cri.go:89] found id: ""
	I1002 20:56:59.080260  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.080267  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:59.080272  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:59.080321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:59.105555  109844 cri.go:89] found id: ""
	I1002 20:56:59.105573  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.105582  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:59.105588  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:59.105643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:59.131895  109844 cri.go:89] found id: ""
	I1002 20:56:59.131911  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.131918  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:59.131923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:59.131971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:59.158699  109844 cri.go:89] found id: ""
	I1002 20:56:59.158716  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.158724  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:59.158731  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:59.158813  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:59.184528  109844 cri.go:89] found id: ""
	I1002 20:56:59.184547  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.184553  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:59.184558  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:59.184621  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:59.210382  109844 cri.go:89] found id: ""
	I1002 20:56:59.210398  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.210406  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:59.210415  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:59.210435  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:59.274026  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:59.274049  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.303182  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:59.303199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:59.372421  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:59.372446  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:59.388344  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:59.388367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:59.449053  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:59.441943    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.442636    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.443715    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.444268    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.445829    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
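The blocks above repeat on a roughly three-second cadence, which is consistent with a poll loop that keeps re-checking for a kube-apiserver process until some deadline. A sketch of such a loop follows; the two-minute timeout is an illustrative assumption, not a value taken from minikube.

// Illustrative poll loop matching the cadence of the blocks above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
	for time.Now().Before(deadline) {
		// Same check as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		// pgrep exits non-zero when no process matches, so err != nil means
		// the apiserver process is still missing.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for a kube-apiserver process")
}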
	I1002 20:57:01.950787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:01.962421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:01.962505  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:01.990756  109844 cri.go:89] found id: ""
	I1002 20:57:01.990774  109844 logs.go:282] 0 containers: []
	W1002 20:57:01.990781  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:01.990786  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:01.990835  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:02.018452  109844 cri.go:89] found id: ""
	I1002 20:57:02.018471  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.018480  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:02.018485  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:02.018568  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:02.046456  109844 cri.go:89] found id: ""
	I1002 20:57:02.046474  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.046481  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:02.046485  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:02.046549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:02.074761  109844 cri.go:89] found id: ""
	I1002 20:57:02.074781  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.074794  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:02.074799  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:02.074859  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:02.102891  109844 cri.go:89] found id: ""
	I1002 20:57:02.102910  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.102919  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:02.102926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:02.102986  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:02.129478  109844 cri.go:89] found id: ""
	I1002 20:57:02.129496  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.129503  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:02.129509  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:02.129571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:02.157911  109844 cri.go:89] found id: ""
	I1002 20:57:02.157927  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.157934  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:02.157941  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:02.157954  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:02.216970  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:02.209199    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.209824    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211437    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211932    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.213815    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:02.216979  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:02.216990  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:02.280811  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:02.280839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:02.310062  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:02.310084  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:02.379511  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:02.379536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:04.894910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:04.906215  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:04.906297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:04.934307  109844 cri.go:89] found id: ""
	I1002 20:57:04.934323  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.934330  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:04.934335  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:04.934388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:04.961709  109844 cri.go:89] found id: ""
	I1002 20:57:04.961725  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.961731  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:04.961751  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:04.961803  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:04.988103  109844 cri.go:89] found id: ""
	I1002 20:57:04.988123  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.988134  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:04.988141  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:04.988204  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:05.015267  109844 cri.go:89] found id: ""
	I1002 20:57:05.015282  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.015293  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:05.015298  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:05.015347  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:05.042563  109844 cri.go:89] found id: ""
	I1002 20:57:05.042585  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.042592  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:05.042597  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:05.042648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:05.070337  109844 cri.go:89] found id: ""
	I1002 20:57:05.070356  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.070365  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:05.070372  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:05.070426  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:05.096592  109844 cri.go:89] found id: ""
	I1002 20:57:05.096607  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.096613  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:05.096622  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:05.096635  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:05.169506  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:05.169529  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:05.184432  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:05.184452  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:05.241625  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:05.234636    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.235167    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.236774    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.237205    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.238801    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:05.241643  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:05.241657  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:05.304652  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:05.304675  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:07.835766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:07.847178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:07.847237  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:07.873351  109844 cri.go:89] found id: ""
	I1002 20:57:07.873370  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.873380  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:07.873387  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:07.873457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:07.900684  109844 cri.go:89] found id: ""
	I1002 20:57:07.900700  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.900707  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:07.900713  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:07.900792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:07.928661  109844 cri.go:89] found id: ""
	I1002 20:57:07.928677  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.928686  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:07.928692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:07.928763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:07.954556  109844 cri.go:89] found id: ""
	I1002 20:57:07.954573  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.954583  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:07.954589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:07.954657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:07.982818  109844 cri.go:89] found id: ""
	I1002 20:57:07.982833  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.982839  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:07.982845  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:07.982903  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:08.010107  109844 cri.go:89] found id: ""
	I1002 20:57:08.010123  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.010129  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:08.010134  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:08.010183  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:08.037125  109844 cri.go:89] found id: ""
	I1002 20:57:08.037142  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.037150  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:08.037157  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:08.037166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:08.096417  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:08.096440  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:08.126218  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:08.126239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:08.194545  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:08.194571  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:08.210281  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:08.210304  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:08.266772  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:08.260009   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.260455   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262035   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262436   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.264034   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:10.768500  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:10.779701  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:10.779778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:10.806553  109844 cri.go:89] found id: ""
	I1002 20:57:10.806570  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.806578  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:10.806583  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:10.806628  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:10.831907  109844 cri.go:89] found id: ""
	I1002 20:57:10.831921  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.831938  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:10.831942  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:10.831987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:10.858755  109844 cri.go:89] found id: ""
	I1002 20:57:10.858773  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.858781  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:10.858786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:10.858844  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:10.886428  109844 cri.go:89] found id: ""
	I1002 20:57:10.886451  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.886460  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:10.886467  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:10.886528  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:10.912297  109844 cri.go:89] found id: ""
	I1002 20:57:10.912336  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.912344  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:10.912351  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:10.912405  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:10.939017  109844 cri.go:89] found id: ""
	I1002 20:57:10.939037  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.939043  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:10.939050  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:10.939112  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:10.964149  109844 cri.go:89] found id: ""
	I1002 20:57:10.964166  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.964173  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:10.964181  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:10.964192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:11.035194  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:11.035220  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:11.050083  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:11.050103  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:11.107489  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:11.100162   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.100777   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102350   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102866   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.104475   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:11.107508  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:11.107525  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:11.168024  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:11.168048  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
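The container-status command above is written defensively: it resolves crictl via `which`, and if the crictl invocation fails it falls back to `docker ps -a`. The same fallback in Go, as a sketch (with the simplifying assumption that a missing binary on PATH is the only reason to fall back, whereas the shell one-liner's `||` chain also catches a failing crictl run):

// Sketch of the crictl-or-docker fallback from the one-liner above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tool := "crictl"
	if _, err := exec.LookPath("crictl"); err != nil {
		tool = "docker" // fallback, mirroring `|| sudo docker ps -a`
	}
	out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
	fmt.Printf("container status via %s (err: %v):\n%s", tool, err, out)
}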
	I1002 20:57:13.699241  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:13.709921  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:13.709982  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:13.735975  109844 cri.go:89] found id: ""
	I1002 20:57:13.735994  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.736004  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:13.736010  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:13.736059  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:13.762999  109844 cri.go:89] found id: ""
	I1002 20:57:13.763017  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.763024  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:13.763029  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:13.763082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:13.790647  109844 cri.go:89] found id: ""
	I1002 20:57:13.790667  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.790676  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:13.790682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:13.790753  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:13.816587  109844 cri.go:89] found id: ""
	I1002 20:57:13.816607  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.816617  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:13.816623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:13.816688  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:13.842814  109844 cri.go:89] found id: ""
	I1002 20:57:13.842829  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.842836  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:13.842841  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:13.842891  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:13.868241  109844 cri.go:89] found id: ""
	I1002 20:57:13.868260  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.868269  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:13.868275  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:13.868327  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:13.895111  109844 cri.go:89] found id: ""
	I1002 20:57:13.895128  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.895138  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:13.895147  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:13.895158  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:13.962125  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:13.962150  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:13.976779  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:13.976795  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:14.033771  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:14.027040   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.027554   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029207   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029659   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.031092   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:14.033782  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:14.033792  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:14.097410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:14.097434  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:16.629753  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:16.640873  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:16.640931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:16.668538  109844 cri.go:89] found id: ""
	I1002 20:57:16.668557  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.668568  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:16.668574  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:16.668633  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:16.697564  109844 cri.go:89] found id: ""
	I1002 20:57:16.697595  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.697605  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:16.697612  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:16.697666  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:16.725228  109844 cri.go:89] found id: ""
	I1002 20:57:16.725242  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.725248  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:16.725253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:16.725297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:16.750995  109844 cri.go:89] found id: ""
	I1002 20:57:16.751010  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.751017  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:16.751022  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:16.751066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:16.777779  109844 cri.go:89] found id: ""
	I1002 20:57:16.777796  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.777803  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:16.777809  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:16.777869  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:16.803504  109844 cri.go:89] found id: ""
	I1002 20:57:16.803521  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.803527  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:16.803532  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:16.803593  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:16.830272  109844 cri.go:89] found id: ""
	I1002 20:57:16.830287  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.830294  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:16.830302  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:16.830313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:16.902383  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:16.902407  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:16.917396  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:16.917415  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:16.974693  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:16.966376   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.966932   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.968658   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.969953   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.970548   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:16.974702  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:16.974713  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:17.035157  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:17.035179  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
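[editor's note] The cycle above is minikube's control-plane health probe: for each expected component it asks crictl for matching containers, and since every query returns no IDs it falls through to log gathering. Below is a minimal standalone sketch of that probe, not minikube's actual cri.go; it assumes crictl is on PATH and passwordless sudo, and exists only to illustrate the pattern the log shows.

// probe_components.go - illustrative sketch of the per-component check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same command the log runs: `sudo crictl ps -a --quiet --name=<component>`.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out)) // --quiet prints one container ID per line
		if len(ids) == 0 {
			// Corresponds to the `W... No container was found matching` lines above.
			fmt.Printf("%s: no container found\n", name)
		} else {
			fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
		}
	}
}

Run on this node it would print "no container found" for every component, matching the warnings in the log.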
	I1002 20:57:19.566417  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:19.577676  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:19.577746  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:19.604005  109844 cri.go:89] found id: ""
	I1002 20:57:19.604021  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.604027  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:19.604032  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:19.604080  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:19.631397  109844 cri.go:89] found id: ""
	I1002 20:57:19.631415  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.631423  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:19.631433  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:19.631486  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:19.657474  109844 cri.go:89] found id: ""
	I1002 20:57:19.657491  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.657498  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:19.657502  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:19.657550  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:19.683165  109844 cri.go:89] found id: ""
	I1002 20:57:19.683183  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.683240  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:19.683248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:19.683303  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:19.709607  109844 cri.go:89] found id: ""
	I1002 20:57:19.709623  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.709629  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:19.709634  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:19.709681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:19.736310  109844 cri.go:89] found id: ""
	I1002 20:57:19.736326  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.736333  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:19.736338  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:19.736388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:19.763087  109844 cri.go:89] found id: ""
	I1002 20:57:19.763103  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.763109  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:19.763117  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:19.763130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:19.777545  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:19.777563  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:19.835265  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:19.828219   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.828825   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830398   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830870   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.832345   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:19.835276  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:19.835288  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:19.900559  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:19.900584  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.929602  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:19.929620  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.502944  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:22.514059  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:22.514108  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:22.540127  109844 cri.go:89] found id: ""
	I1002 20:57:22.540144  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.540152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:22.540158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:22.540229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:22.566906  109844 cri.go:89] found id: ""
	I1002 20:57:22.566920  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.566929  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:22.566936  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:22.566988  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:22.593141  109844 cri.go:89] found id: ""
	I1002 20:57:22.593160  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.593170  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:22.593178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:22.593258  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:22.617379  109844 cri.go:89] found id: ""
	I1002 20:57:22.617395  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.617403  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:22.617408  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:22.617482  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:22.642997  109844 cri.go:89] found id: ""
	I1002 20:57:22.643015  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.643023  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:22.643030  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:22.643088  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:22.669891  109844 cri.go:89] found id: ""
	I1002 20:57:22.669910  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.669918  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:22.669925  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:22.669979  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:22.698537  109844 cri.go:89] found id: ""
	I1002 20:57:22.698553  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.698559  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:22.698571  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:22.698582  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.764795  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:22.764818  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:22.779339  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:22.779360  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:22.835541  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:22.828422   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.828970   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.830522   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.831086   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.832606   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:22.835550  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:22.835561  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:22.893791  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:22.893816  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
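[editor's note] Every "describe nodes" attempt fails identically: kubectl cannot reach the apiserver on localhost:8441 (the port this profile's kubeconfig points at, per the log). A quick way to confirm that nothing is listening on that port, independent of kubectl; a hedged sketch, not part of the test suite:

// dial_probe.go - check whether anything accepts TCP connections on :8441.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// On this node we would expect the same "connect: connection refused"
		// that kubectl reports in the stderr blocks above.
		fmt.Println("apiserver port closed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on :8441")
}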
	I1002 20:57:25.423487  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:25.434946  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:25.435008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:25.461262  109844 cri.go:89] found id: ""
	I1002 20:57:25.461278  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.461286  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:25.461293  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:25.461373  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:25.487938  109844 cri.go:89] found id: ""
	I1002 20:57:25.487954  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.487960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:25.487965  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:25.488008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:25.513819  109844 cri.go:89] found id: ""
	I1002 20:57:25.513833  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.513839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:25.513844  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:25.513887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:25.540047  109844 cri.go:89] found id: ""
	I1002 20:57:25.540064  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.540073  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:25.540080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:25.540218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:25.565240  109844 cri.go:89] found id: ""
	I1002 20:57:25.565256  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.565262  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:25.565267  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:25.565332  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:25.591199  109844 cri.go:89] found id: ""
	I1002 20:57:25.591214  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.591221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:25.591226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:25.591271  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:25.617021  109844 cri.go:89] found id: ""
	I1002 20:57:25.617040  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.617047  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:25.617055  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:25.617071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:25.674861  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:25.668100   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.668693   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670241   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670676   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.672203   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:25.674872  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:25.674887  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:25.735460  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:25.735487  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.765055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:25.765071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:25.833285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:25.833307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.348626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:28.359370  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:28.359432  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:28.384665  109844 cri.go:89] found id: ""
	I1002 20:57:28.384681  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.384688  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:28.384692  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:28.384756  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:28.411127  109844 cri.go:89] found id: ""
	I1002 20:57:28.411142  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.411148  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:28.411153  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:28.411198  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:28.439419  109844 cri.go:89] found id: ""
	I1002 20:57:28.439433  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.439439  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:28.439444  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:28.439491  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:28.465419  109844 cri.go:89] found id: ""
	I1002 20:57:28.465434  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.465441  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:28.465446  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:28.465494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:28.492080  109844 cri.go:89] found id: ""
	I1002 20:57:28.492098  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.492107  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:28.492114  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:28.492171  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:28.518199  109844 cri.go:89] found id: ""
	I1002 20:57:28.518215  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.518221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:28.518226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:28.518290  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:28.545226  109844 cri.go:89] found id: ""
	I1002 20:57:28.545241  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.545248  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:28.545255  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:28.545266  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:28.574035  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:28.574055  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:28.640805  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:28.640827  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.655177  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:28.655195  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:28.715784  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:28.707733   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.708329   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.710706   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.711235   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.712816   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:28.715802  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:28.715813  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.282555  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:31.293415  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:31.293460  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:31.320069  109844 cri.go:89] found id: ""
	I1002 20:57:31.320084  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.320090  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:31.320096  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:31.320141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:31.347288  109844 cri.go:89] found id: ""
	I1002 20:57:31.347308  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.347315  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:31.347319  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:31.347370  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:31.373910  109844 cri.go:89] found id: ""
	I1002 20:57:31.373926  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.373932  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:31.373936  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:31.373980  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:31.399488  109844 cri.go:89] found id: ""
	I1002 20:57:31.399504  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.399510  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:31.399515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:31.399579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:31.425794  109844 cri.go:89] found id: ""
	I1002 20:57:31.425809  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.425815  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:31.425824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:31.425878  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:31.452232  109844 cri.go:89] found id: ""
	I1002 20:57:31.452247  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.452253  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:31.452258  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:31.452304  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:31.478189  109844 cri.go:89] found id: ""
	I1002 20:57:31.478208  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.478217  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:31.478226  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:31.478239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:31.535213  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:31.527960   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.528553   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530059   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530507   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.532158   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:31.535223  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:31.535235  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.596390  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:31.596416  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:31.625326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:31.625347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:31.695449  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:31.695470  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
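[editor's note] The timestamps show the whole probe repeating on a roughly three-second cadence (20:57:16, :19, :22, :25, ...), each round opening with `sudo pgrep -xnf kube-apiserver.*minikube.*`. A sketch of that kind of bounded poll loop follows; the interval and timeout are illustrative assumptions, not minikube's constants, and apiserverUp is a hypothetical helper mirroring the pgrep check:

// poll_apiserver.go - bounded retry loop around the pgrep probe from the log.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`:
// pgrep exits non-zero when no process matches.
func apiserverUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverUp() {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}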
	I1002 20:57:34.210847  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:34.221612  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:34.221660  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:34.248100  109844 cri.go:89] found id: ""
	I1002 20:57:34.248118  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.248124  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:34.248129  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:34.248177  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:34.273928  109844 cri.go:89] found id: ""
	I1002 20:57:34.273947  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.273953  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:34.273958  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:34.274004  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:34.300659  109844 cri.go:89] found id: ""
	I1002 20:57:34.300677  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.300684  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:34.300688  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:34.300751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:34.328932  109844 cri.go:89] found id: ""
	I1002 20:57:34.328950  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.328958  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:34.328964  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:34.329012  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:34.355289  109844 cri.go:89] found id: ""
	I1002 20:57:34.355305  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.355315  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:34.355320  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:34.355371  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:34.381635  109844 cri.go:89] found id: ""
	I1002 20:57:34.381651  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.381658  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:34.381664  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:34.381713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:34.406539  109844 cri.go:89] found id: ""
	I1002 20:57:34.406558  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.406567  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:34.406575  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:34.406586  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:34.476613  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:34.476637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.491529  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:34.491545  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:34.548604  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:34.541411   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.541857   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543425   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543873   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.545469   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:34.548616  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:34.548627  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:34.614034  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:34.614057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.146000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:37.156680  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:37.156731  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:37.183104  109844 cri.go:89] found id: ""
	I1002 20:57:37.183120  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.183126  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:37.183130  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:37.183180  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:37.209542  109844 cri.go:89] found id: ""
	I1002 20:57:37.209561  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.209570  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:37.209593  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:37.209651  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:37.236887  109844 cri.go:89] found id: ""
	I1002 20:57:37.236902  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.236907  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:37.236912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:37.236955  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:37.263572  109844 cri.go:89] found id: ""
	I1002 20:57:37.263590  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.263600  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:37.263606  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:37.263670  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:37.290064  109844 cri.go:89] found id: ""
	I1002 20:57:37.290081  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.290088  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:37.290092  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:37.290140  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:37.315854  109844 cri.go:89] found id: ""
	I1002 20:57:37.315870  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.315877  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:37.315881  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:37.315928  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:37.341863  109844 cri.go:89] found id: ""
	I1002 20:57:37.341881  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.341888  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:37.341896  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:37.341906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.370994  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:37.371009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:37.436106  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:37.436137  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:37.451121  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:37.451149  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:37.506868  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:37.499823   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.500382   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.501949   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.502458   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.504014   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:37.506882  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:37.506894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.067997  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:40.078961  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:40.079015  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:40.104825  109844 cri.go:89] found id: ""
	I1002 20:57:40.104841  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.104848  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:40.104853  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:40.104901  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:40.131395  109844 cri.go:89] found id: ""
	I1002 20:57:40.131410  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.131417  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:40.131421  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:40.131472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:40.156879  109844 cri.go:89] found id: ""
	I1002 20:57:40.156894  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.156900  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:40.156904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:40.156950  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:40.184037  109844 cri.go:89] found id: ""
	I1002 20:57:40.184052  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.184058  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:40.184063  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:40.184109  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:40.209631  109844 cri.go:89] found id: ""
	I1002 20:57:40.209645  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.209652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:40.209657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:40.209718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:40.235959  109844 cri.go:89] found id: ""
	I1002 20:57:40.235974  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.235981  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:40.235985  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:40.236031  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:40.263268  109844 cri.go:89] found id: ""
	I1002 20:57:40.263295  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.263303  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:40.263312  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:40.263329  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:40.277655  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:40.277674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:40.333759  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:40.326797   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.327375   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.328853   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.329279   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.330917   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:40.333771  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:40.333782  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.398547  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:40.398573  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:40.429055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:40.429075  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
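[editor's note] Each round ends by fanning out over the same fixed set of gather commands: kubelet and CRI-O via journalctl, a filtered dmesg, and a crictl listing with a docker fallback. The sketch below runs the identical shell commands taken from the log; it is purely illustrative, needs sudo on the node, and makes no claim about how minikube's ssh_runner invokes them.

// gather_logs.go - rerun the log-gathering commands shown above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the `ssh_runner.go:195` lines in the log.
	gather := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range gather {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("--- %s (err=%v) ---\n%s\n", name, err, out)
	}
}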
	I1002 20:57:43.000960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:43.011533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:43.011594  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:43.038639  109844 cri.go:89] found id: ""
	I1002 20:57:43.038658  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.038664  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:43.038670  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:43.038718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:43.064610  109844 cri.go:89] found id: ""
	I1002 20:57:43.064629  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.064638  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:43.064645  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:43.064692  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:43.092797  109844 cri.go:89] found id: ""
	I1002 20:57:43.092814  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.092829  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:43.092836  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:43.092905  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:43.117372  109844 cri.go:89] found id: ""
	I1002 20:57:43.117390  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.117398  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:43.117405  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:43.117455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:43.143883  109844 cri.go:89] found id: ""
	I1002 20:57:43.143898  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.143903  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:43.143908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:43.143954  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:43.168684  109844 cri.go:89] found id: ""
	I1002 20:57:43.168703  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.168711  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:43.168719  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:43.168794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:43.194200  109844 cri.go:89] found id: ""
	I1002 20:57:43.194219  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.194226  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:43.194233  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:43.194243  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:43.224696  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:43.224716  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.292485  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:43.292511  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:43.307408  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:43.307426  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:43.365123  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:43.365138  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:43.365151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:45.930176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:45.940786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:45.940834  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:45.966149  109844 cri.go:89] found id: ""
	I1002 20:57:45.966163  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.966170  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:45.966174  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:45.966229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:45.991076  109844 cri.go:89] found id: ""
	I1002 20:57:45.991091  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.991098  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:45.991103  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:45.991160  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:46.016684  109844 cri.go:89] found id: ""
	I1002 20:57:46.016699  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.016707  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:46.016712  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:46.016783  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:46.044048  109844 cri.go:89] found id: ""
	I1002 20:57:46.044066  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.044075  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:46.044080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:46.044126  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:46.072438  109844 cri.go:89] found id: ""
	I1002 20:57:46.072458  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.072463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:46.072468  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:46.072513  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:46.098408  109844 cri.go:89] found id: ""
	I1002 20:57:46.098427  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.098435  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:46.098440  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:46.098494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:46.125237  109844 cri.go:89] found id: ""
	I1002 20:57:46.125253  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.125260  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:46.125267  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:46.125279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:46.181454  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:46.181465  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:46.181477  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:46.245377  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:46.245400  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:46.273829  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:46.273850  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:46.343515  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:46.343537  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:48.859249  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:48.870377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:48.870433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:48.897669  109844 cri.go:89] found id: ""
	I1002 20:57:48.897687  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.897694  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:48.897699  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:48.897762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:48.925008  109844 cri.go:89] found id: ""
	I1002 20:57:48.925023  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.925030  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:48.925036  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:48.925083  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:48.951643  109844 cri.go:89] found id: ""
	I1002 20:57:48.951657  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.951664  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:48.951668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:48.951714  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:48.979002  109844 cri.go:89] found id: ""
	I1002 20:57:48.979020  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.979029  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:48.979036  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:48.979093  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:49.004625  109844 cri.go:89] found id: ""
	I1002 20:57:49.004641  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.004648  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:49.004652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:49.004701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:49.031772  109844 cri.go:89] found id: ""
	I1002 20:57:49.031788  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.031793  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:49.031805  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:49.031862  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:49.057980  109844 cri.go:89] found id: ""
	I1002 20:57:49.057996  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.058004  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:49.058013  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:49.058023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:49.124248  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:49.124270  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:49.138512  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:49.138533  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:49.195138  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:49.187056   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.188681   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.189138   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.190686   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.191107   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:49.187056   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.188681   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.189138   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.190686   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.191107   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:49.195151  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:49.195173  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:49.258973  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:49.258997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:51.791466  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:51.802977  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:51.803035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:51.828498  109844 cri.go:89] found id: ""
	I1002 20:57:51.828514  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.828521  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:51.828526  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:51.828588  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:51.854342  109844 cri.go:89] found id: ""
	I1002 20:57:51.854360  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.854371  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:51.854378  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:51.854456  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:51.880507  109844 cri.go:89] found id: ""
	I1002 20:57:51.880524  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.880532  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:51.880537  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:51.880595  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:51.905868  109844 cri.go:89] found id: ""
	I1002 20:57:51.905885  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.905899  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:51.905906  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:51.905958  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:51.931501  109844 cri.go:89] found id: ""
	I1002 20:57:51.931520  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.931527  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:51.931533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:51.931584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:51.959507  109844 cri.go:89] found id: ""
	I1002 20:57:51.959531  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.959537  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:51.959543  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:51.959597  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:51.986060  109844 cri.go:89] found id: ""
	I1002 20:57:51.986075  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.986082  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:51.986090  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:51.986102  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:52.001242  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:52.001265  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:52.058943  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:52.058955  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:52.058966  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:52.124165  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:52.124189  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:52.153884  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:52.153905  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.722906  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:54.734175  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:54.734232  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:54.759813  109844 cri.go:89] found id: ""
	I1002 20:57:54.759827  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.759834  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:54.759839  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:54.759886  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:54.786211  109844 cri.go:89] found id: ""
	I1002 20:57:54.786228  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.786234  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:54.786238  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:54.786296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:54.812209  109844 cri.go:89] found id: ""
	I1002 20:57:54.812224  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.812231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:54.812235  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:54.812279  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:54.838338  109844 cri.go:89] found id: ""
	I1002 20:57:54.838354  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.838359  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:54.838364  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:54.838409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:54.864235  109844 cri.go:89] found id: ""
	I1002 20:57:54.864250  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.864257  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:54.864262  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:54.864313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:54.889322  109844 cri.go:89] found id: ""
	I1002 20:57:54.889338  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.889345  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:54.889350  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:54.889408  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:54.914375  109844 cri.go:89] found id: ""
	I1002 20:57:54.914389  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.914396  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:54.914403  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:54.914413  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.982673  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:54.982695  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:54.997624  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:54.997643  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:55.054906  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:55.054918  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:55.054930  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:55.114767  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:55.114791  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:57.644999  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:57.656449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:57.656504  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:57.681519  109844 cri.go:89] found id: ""
	I1002 20:57:57.681536  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.681547  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:57.681562  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:57.681613  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:57.707282  109844 cri.go:89] found id: ""
	I1002 20:57:57.707299  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.707306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:57.707311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:57.707368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:57.733730  109844 cri.go:89] found id: ""
	I1002 20:57:57.733764  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.733773  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:57.733779  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:57.733829  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:57.759892  109844 cri.go:89] found id: ""
	I1002 20:57:57.759910  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.759919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:57.759930  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:57.759977  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:57.786461  109844 cri.go:89] found id: ""
	I1002 20:57:57.786480  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.786488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:57.786494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:57.786554  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:57.811498  109844 cri.go:89] found id: ""
	I1002 20:57:57.811513  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.811520  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:57.811525  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:57.811584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:57.838643  109844 cri.go:89] found id: ""
	I1002 20:57:57.838658  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.838664  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:57.838672  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:57.838683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:57.903092  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:57.903112  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:57.917294  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:57.917313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:57.973186  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:57.973196  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:57.973206  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:58.037591  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:58.037615  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:00.568697  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:00.579453  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:00.579509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:00.605205  109844 cri.go:89] found id: ""
	I1002 20:58:00.605221  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.605228  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:00.605236  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:00.605281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:00.630667  109844 cri.go:89] found id: ""
	I1002 20:58:00.630683  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.630690  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:00.630695  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:00.630779  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:00.656328  109844 cri.go:89] found id: ""
	I1002 20:58:00.656343  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.656349  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:00.656356  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:00.656404  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:00.687352  109844 cri.go:89] found id: ""
	I1002 20:58:00.687372  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.687380  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:00.687387  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:00.687450  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:00.715971  109844 cri.go:89] found id: ""
	I1002 20:58:00.715989  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.715996  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:00.716001  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:00.716051  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:00.743250  109844 cri.go:89] found id: ""
	I1002 20:58:00.743267  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.743274  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:00.743279  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:00.743337  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:00.768377  109844 cri.go:89] found id: ""
	I1002 20:58:00.768394  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.768402  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:00.768410  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:00.768421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:00.836309  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:00.836330  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:00.851074  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:00.851091  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:00.909067  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:00.909078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:00.909089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:00.967974  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:00.967996  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:03.498950  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:03.509660  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:03.509721  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:03.535662  109844 cri.go:89] found id: ""
	I1002 20:58:03.535677  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.535684  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:03.535689  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:03.535733  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:03.561250  109844 cri.go:89] found id: ""
	I1002 20:58:03.561265  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.561272  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:03.561277  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:03.561321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:03.587048  109844 cri.go:89] found id: ""
	I1002 20:58:03.587067  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.587076  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:03.587083  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:03.587147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:03.613674  109844 cri.go:89] found id: ""
	I1002 20:58:03.613690  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.613697  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:03.613702  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:03.613769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:03.640328  109844 cri.go:89] found id: ""
	I1002 20:58:03.640347  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.640355  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:03.640361  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:03.640422  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:03.666291  109844 cri.go:89] found id: ""
	I1002 20:58:03.666312  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.666319  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:03.666331  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:03.666382  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:03.691967  109844 cri.go:89] found id: ""
	I1002 20:58:03.691985  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.691992  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:03.692006  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:03.692016  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:03.759409  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:03.759439  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:03.774258  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:03.774279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:03.832338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:03.832353  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:03.832368  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:03.893996  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:03.894020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:06.425787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:06.436589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:06.436637  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:06.462848  109844 cri.go:89] found id: ""
	I1002 20:58:06.462863  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.462870  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:06.462876  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:06.462923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:06.488755  109844 cri.go:89] found id: ""
	I1002 20:58:06.488775  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.488784  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:06.488790  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:06.488840  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:06.514901  109844 cri.go:89] found id: ""
	I1002 20:58:06.514916  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.514922  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:06.514927  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:06.514970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:06.541198  109844 cri.go:89] found id: ""
	I1002 20:58:06.541216  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.541222  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:06.541227  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:06.541274  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:06.566811  109844 cri.go:89] found id: ""
	I1002 20:58:06.566829  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.566835  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:06.566839  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:06.566889  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:06.592998  109844 cri.go:89] found id: ""
	I1002 20:58:06.593016  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.593025  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:06.593032  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:06.593082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:06.619126  109844 cri.go:89] found id: ""
	I1002 20:58:06.619142  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.619149  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:06.619156  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:06.619169  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:06.688927  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:06.688949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:06.703470  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:06.703489  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:06.759531  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:06.759547  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:06.759558  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:06.821429  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:06.821453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
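
The lines above show one pass of the wait loop this test is stuck in: a pgrep for a kube-apiserver process, one "crictl ps -a --quiet --name=<component>" probe per control-plane container, and, when every probe comes back empty, a log-gathering pass before the next attempt roughly three seconds later. The Go sketch below reproduces that shape for illustration only; it is not minikube's code, the names (containerIDs, componentNames) are made up, and the cadence is inferred from the log timestamps.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// componentNames mirrors the containers the log probes, in order.
var componentNames = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// containerIDs runs the same crictl query as the log and returns one
// container ID per non-empty output line (an empty slice means no match).
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for {
		if ids := containerIDs("kube-apiserver"); len(ids) > 0 {
			fmt.Println("kube-apiserver container found:", ids[0])
			return
		}
		// Nothing found: report every missing component, as the log does.
		for _, name := range componentNames {
			if len(containerIDs(name)) == 0 {
				fmt.Printf("no container matching %q\n", name)
			}
		}
		time.Sleep(3 * time.Second) // matches the ~3s spacing of the log timestamps
	}
}
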
	I1002 20:58:09.350584  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:09.361407  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:09.361457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:09.387670  109844 cri.go:89] found id: ""
	I1002 20:58:09.387686  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.387692  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:09.387697  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:09.387769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:09.414282  109844 cri.go:89] found id: ""
	I1002 20:58:09.414297  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.414303  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:09.414308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:09.414359  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:09.439986  109844 cri.go:89] found id: ""
	I1002 20:58:09.440004  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.440013  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:09.440021  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:09.440078  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:09.465260  109844 cri.go:89] found id: ""
	I1002 20:58:09.465274  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.465279  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:09.465284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:09.465342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:09.490459  109844 cri.go:89] found id: ""
	I1002 20:58:09.490475  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.490485  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:09.490492  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:09.490542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:09.517572  109844 cri.go:89] found id: ""
	I1002 20:58:09.517589  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.517597  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:09.517604  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:09.517657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:09.543171  109844 cri.go:89] found id: ""
	I1002 20:58:09.543190  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.543200  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:09.543210  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:09.543224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:09.610811  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:09.610836  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:09.625732  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:09.625765  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:09.684133  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:09.684159  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:09.684172  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:09.750121  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:09.750146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
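
Each 'found id: ""' line above is the empty stdout of that crictl query: -a includes exited containers, --quiet prints only container IDs, and --name filters by container name, so no output means zero matches in any state. The "container status" step at the end of each pass uses a fallback chain instead: crictl resolved via which when available, the bare crictl name otherwise, and docker ps -a if crictl fails outright. Below is a minimal Go sketch of that chain; the which-based path resolution is omitted for brevity, so this is an approximation, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring
// the log's `sudo crictl ps -a || sudo docker ps -a` pattern.
func containerStatus() ([]byte, error) {
	if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("neither crictl nor docker answered:", err)
		return
	}
	fmt.Print(string(out))
}
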
	I1002 20:58:12.281914  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:12.292614  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:12.292681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:12.319213  109844 cri.go:89] found id: ""
	I1002 20:58:12.319229  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.319236  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:12.319241  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:12.319307  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:12.346475  109844 cri.go:89] found id: ""
	I1002 20:58:12.346491  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.346497  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:12.346506  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:12.346558  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:12.373396  109844 cri.go:89] found id: ""
	I1002 20:58:12.373412  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.373418  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:12.373422  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:12.373472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:12.399960  109844 cri.go:89] found id: ""
	I1002 20:58:12.399975  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.399984  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:12.399990  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:12.400046  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:12.426115  109844 cri.go:89] found id: ""
	I1002 20:58:12.426134  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.426143  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:12.426148  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:12.426199  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:12.453989  109844 cri.go:89] found id: ""
	I1002 20:58:12.454005  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.454012  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:12.454017  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:12.454082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:12.480468  109844 cri.go:89] found id: ""
	I1002 20:58:12.480482  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.480489  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:12.480497  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:12.480506  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:12.546963  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:12.546987  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:12.561865  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:12.561884  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:12.618630  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:12.611604   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.612174   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.613811   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.614220   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.615797   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:12.611604   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.612174   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.613811   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.614220   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.615797   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:12.618644  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:12.618659  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:12.679779  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:12.679800  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:15.211438  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:15.222920  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:15.222984  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:15.249459  109844 cri.go:89] found id: ""
	I1002 20:58:15.249477  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.249486  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:15.249493  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:15.249563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:15.275298  109844 cri.go:89] found id: ""
	I1002 20:58:15.275317  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.275324  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:15.275329  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:15.275376  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:15.301700  109844 cri.go:89] found id: ""
	I1002 20:58:15.301716  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.301722  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:15.301730  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:15.301798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:15.329414  109844 cri.go:89] found id: ""
	I1002 20:58:15.329435  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.329442  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:15.329449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:15.329509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:15.355068  109844 cri.go:89] found id: ""
	I1002 20:58:15.355085  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.355093  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:15.355098  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:15.355148  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:15.380359  109844 cri.go:89] found id: ""
	I1002 20:58:15.380376  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.380383  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:15.380388  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:15.380447  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:15.407083  109844 cri.go:89] found id: ""
	I1002 20:58:15.407100  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.407107  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:15.407114  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:15.407125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:15.475929  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:15.475952  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:15.490571  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:15.490597  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:15.548455  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:15.541509   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.542074   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.543830   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.544263   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.545369   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:15.541509   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.542074   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.543830   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.544263   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.545369   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:15.548470  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:15.548492  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:15.612985  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:15.613011  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
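
Every describe-nodes attempt above fails at the same point: kubectl cannot even open a TCP connection to localhost:8441, so the apiserver is not listening at all rather than merely unhealthy. (The non-default port is consistent with a functional-test cluster started with a custom --apiserver-port; minikube's default is 8443.) A quick probe like the sketch below, which is illustrative and not anything minikube runs, separates "socket closed" from "apiserver up but erroring":

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the endpoint kubectl is using; "connection refused" here means
	// nothing is bound to the port, matching the errors in the log.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is listening; look for HTTP-level errors instead")
}
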
	I1002 20:58:18.144173  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:18.154768  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:18.154839  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:18.181108  109844 cri.go:89] found id: ""
	I1002 20:58:18.181127  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.181135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:18.181142  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:18.181211  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:18.207541  109844 cri.go:89] found id: ""
	I1002 20:58:18.207557  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.207564  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:18.207568  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:18.207617  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:18.234607  109844 cri.go:89] found id: ""
	I1002 20:58:18.234623  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.234630  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:18.234635  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:18.234682  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:18.262449  109844 cri.go:89] found id: ""
	I1002 20:58:18.262465  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.262471  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:18.262476  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:18.262525  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:18.288587  109844 cri.go:89] found id: ""
	I1002 20:58:18.288604  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.288611  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:18.288615  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:18.288671  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:18.315591  109844 cri.go:89] found id: ""
	I1002 20:58:18.315608  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.315616  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:18.315623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:18.315686  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:18.341916  109844 cri.go:89] found id: ""
	I1002 20:58:18.341934  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.341943  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:18.341953  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:18.341967  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:18.409370  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:18.409397  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:18.423940  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:18.423957  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:18.481317  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:18.474299   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.474857   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476482   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476953   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.478581   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:18.474299   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.474857   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476482   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476953   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.478581   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:18.481328  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:18.481341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:18.544851  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:18.544915  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:21.076714  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:21.087984  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:21.088035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:21.114553  109844 cri.go:89] found id: ""
	I1002 20:58:21.114567  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.114574  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:21.114579  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:21.114627  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:21.140623  109844 cri.go:89] found id: ""
	I1002 20:58:21.140640  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.140647  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:21.140652  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:21.140709  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:21.167287  109844 cri.go:89] found id: ""
	I1002 20:58:21.167303  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.167310  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:21.167314  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:21.167366  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:21.192955  109844 cri.go:89] found id: ""
	I1002 20:58:21.192970  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.192976  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:21.192981  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:21.193026  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:21.218443  109844 cri.go:89] found id: ""
	I1002 20:58:21.218461  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.218470  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:21.218477  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:21.218543  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:21.245610  109844 cri.go:89] found id: ""
	I1002 20:58:21.245629  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.245636  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:21.245641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:21.245705  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:21.274044  109844 cri.go:89] found id: ""
	I1002 20:58:21.274062  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.274071  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:21.274082  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:21.274094  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:21.344823  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:21.344846  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:21.359586  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:21.359607  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:21.415715  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:21.408650   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.409207   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.410856   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.411238   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.412941   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:21.408650   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.409207   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.410856   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.411238   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.412941   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:21.415727  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:21.415761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:21.481719  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:21.481748  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.012099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:24.023176  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:24.023230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:24.048833  109844 cri.go:89] found id: ""
	I1002 20:58:24.048848  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.048854  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:24.048859  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:24.048910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:24.075718  109844 cri.go:89] found id: ""
	I1002 20:58:24.075734  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.075760  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:24.075767  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:24.075820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:24.102393  109844 cri.go:89] found id: ""
	I1002 20:58:24.102408  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.102415  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:24.102420  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:24.102470  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:24.128211  109844 cri.go:89] found id: ""
	I1002 20:58:24.128226  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.128233  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:24.128237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:24.128295  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:24.154298  109844 cri.go:89] found id: ""
	I1002 20:58:24.154317  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.154337  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:24.154342  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:24.154400  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:24.180259  109844 cri.go:89] found id: ""
	I1002 20:58:24.180279  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.180289  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:24.180294  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:24.180343  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:24.206017  109844 cri.go:89] found id: ""
	I1002 20:58:24.206032  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.206038  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:24.206045  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:24.206057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:24.262477  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:24.255581   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.256099   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.257667   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.258105   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.259636   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:24.255581   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.256099   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.257667   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.258105   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.259636   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:24.262487  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:24.262499  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:24.326558  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:24.326583  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.357911  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:24.357927  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:24.425144  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:24.425170  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
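
The gathering steps themselves are plain shell commands run over SSH: journalctl -u kubelet -n 400 and journalctl -u crio -n 400 take the last 400 lines of each unit's journal, and the dmesg invocation restricts the kernel ring buffer to warn/err/crit/alert/emerg entries (in util-linux dmesg, -P disables the pager, -H prints human-readable timestamps, -L=never disables color). Note that the order varies between iterations; this 20:58:24 pass ran describe nodes first and dmesg last. A self-contained sketch of the fan-out, with hypothetical names but the shell commands copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

// logSources pairs a label with the exact shell command seen in the log;
// a slice keeps the gathering order deterministic.
var logSources = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"CRI-O", "sudo journalctl -u crio -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

// gatherLogs runs each command and keeps its combined output (or the error).
func gatherLogs() map[string]string {
	out := make(map[string]string)
	for _, src := range logSources {
		b, err := exec.Command("/bin/bash", "-c", src.cmd).CombinedOutput()
		if err != nil {
			out[src.name] = "error: " + err.Error()
			continue
		}
		out[src.name] = string(b)
	}
	return out
}

func main() {
	logs := gatherLogs()
	for _, src := range logSources {
		fmt.Printf("==> %s: %d bytes gathered\n", src.name, len(logs[src.name]))
	}
}
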
	I1002 20:58:26.942340  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:26.953162  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:26.953210  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:26.977629  109844 cri.go:89] found id: ""
	I1002 20:58:26.977645  109844 logs.go:282] 0 containers: []
	W1002 20:58:26.977652  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:26.977656  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:26.977701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:27.003794  109844 cri.go:89] found id: ""
	I1002 20:58:27.003810  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.003817  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:27.003821  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:27.003871  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:27.031644  109844 cri.go:89] found id: ""
	I1002 20:58:27.031662  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.031669  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:27.031673  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:27.031723  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:27.058490  109844 cri.go:89] found id: ""
	I1002 20:58:27.058522  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.058529  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:27.058533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:27.058580  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:27.083451  109844 cri.go:89] found id: ""
	I1002 20:58:27.083468  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.083475  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:27.083480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:27.083536  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:27.108449  109844 cri.go:89] found id: ""
	I1002 20:58:27.108467  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.108475  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:27.108481  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:27.108542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:27.135415  109844 cri.go:89] found id: ""
	I1002 20:58:27.135433  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.135441  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:27.135451  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:27.135467  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:27.206016  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:27.206039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:27.220873  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:27.220894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:27.276309  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:27.269235   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.269791   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271364   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271799   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.273317   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:27.269235   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.269791   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271364   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271799   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.273317   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:27.276320  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:27.276335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:27.341398  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:27.341421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:29.872391  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:29.883459  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:29.883531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:29.909713  109844 cri.go:89] found id: ""
	I1002 20:58:29.909729  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.909748  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:29.909755  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:29.909806  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:29.934338  109844 cri.go:89] found id: ""
	I1002 20:58:29.934354  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.934360  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:29.934365  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:29.934409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:29.961900  109844 cri.go:89] found id: ""
	I1002 20:58:29.961917  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.961926  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:29.961932  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:29.961998  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:29.988238  109844 cri.go:89] found id: ""
	I1002 20:58:29.988253  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.988260  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:29.988265  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:29.988328  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:30.013598  109844 cri.go:89] found id: ""
	I1002 20:58:30.013613  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.013619  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:30.013624  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:30.013674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:30.040799  109844 cri.go:89] found id: ""
	I1002 20:58:30.040817  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.040824  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:30.040829  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:30.040875  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:30.067159  109844 cri.go:89] found id: ""
	I1002 20:58:30.067174  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.067180  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:30.067187  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:30.067199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:30.081264  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:30.081282  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:30.136411  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:30.129335   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.129861   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131445   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131865   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.133370   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:30.129335   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.129861   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131445   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131865   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.133370   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:30.136422  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:30.136436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:30.198567  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:30.198599  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:30.226466  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:30.226488  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
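
Each cycle above shows minikube probing the CRI for control-plane containers and, finding none, collecting diagnostics instead. A minimal manual reproduction of the same probe, using only commands that appear verbatim in the log (run inside the minikube node):

    # Look for a kube-apiserver container in any state; empty output means it never started
    sudo crictl ps -a --quiet --name=kube-apiserver
    # With no containers to inspect, fall back to the kubelet journal
    sudo journalctl -u kubelet -n 400
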
	I1002 20:58:32.794266  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:32.805593  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:32.805643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:32.832000  109844 cri.go:89] found id: ""
	I1002 20:58:32.832015  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.832022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:32.832027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:32.832072  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:32.858662  109844 cri.go:89] found id: ""
	I1002 20:58:32.858680  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.858687  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:32.858691  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:32.858758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:32.884652  109844 cri.go:89] found id: ""
	I1002 20:58:32.884671  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.884679  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:32.884686  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:32.884767  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:32.911548  109844 cri.go:89] found id: ""
	I1002 20:58:32.911571  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.911578  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:32.911583  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:32.911631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:32.939319  109844 cri.go:89] found id: ""
	I1002 20:58:32.939335  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.939343  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:32.939347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:32.939396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:32.965654  109844 cri.go:89] found id: ""
	I1002 20:58:32.965670  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.965677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:32.965681  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:32.965750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:32.991821  109844 cri.go:89] found id: ""
	I1002 20:58:32.991837  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.991843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:32.991851  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:32.991861  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:33.059096  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:33.059118  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:33.074520  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:33.074536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:33.130853  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:33.124022   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.124509   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126111   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126586   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.128121   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:33.124022   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.124509   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126111   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126586   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.128121   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:33.130867  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:33.130881  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:33.196122  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:33.196146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:35.728638  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:35.739628  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:35.739676  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:35.764726  109844 cri.go:89] found id: ""
	I1002 20:58:35.764760  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.764771  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:35.764777  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:35.764823  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:35.791011  109844 cri.go:89] found id: ""
	I1002 20:58:35.791026  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.791032  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:35.791037  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:35.791082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:35.817209  109844 cri.go:89] found id: ""
	I1002 20:58:35.817225  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.817231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:35.817236  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:35.817281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:35.842125  109844 cri.go:89] found id: ""
	I1002 20:58:35.842139  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.842145  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:35.842154  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:35.842200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:35.867608  109844 cri.go:89] found id: ""
	I1002 20:58:35.867625  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.867631  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:35.867636  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:35.867681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:35.893798  109844 cri.go:89] found id: ""
	I1002 20:58:35.893813  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.893819  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:35.893824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:35.893881  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:35.920822  109844 cri.go:89] found id: ""
	I1002 20:58:35.920837  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.920843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:35.920851  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:35.920862  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:35.982786  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:35.982809  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:36.012445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:36.012461  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:36.079729  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:36.079764  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:36.094119  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:36.094139  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:36.149838  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:36.142929   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.143480   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145076   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145533   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.147087   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:36.142929   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.143480   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145076   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145533   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.147087   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:38.650569  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:38.661345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:38.661406  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:38.687690  109844 cri.go:89] found id: ""
	I1002 20:58:38.687709  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.687719  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:38.687729  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:38.687800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:38.712812  109844 cri.go:89] found id: ""
	I1002 20:58:38.712830  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.712840  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:38.712846  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:38.712897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:38.738922  109844 cri.go:89] found id: ""
	I1002 20:58:38.738938  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.738945  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:38.738951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:38.739014  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:38.766166  109844 cri.go:89] found id: ""
	I1002 20:58:38.766184  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.766191  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:38.766201  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:38.766259  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:38.793662  109844 cri.go:89] found id: ""
	I1002 20:58:38.793679  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.793687  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:38.793692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:38.793758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:38.820204  109844 cri.go:89] found id: ""
	I1002 20:58:38.820225  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.820233  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:38.820242  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:38.820301  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:38.846100  109844 cri.go:89] found id: ""
	I1002 20:58:38.846116  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.846122  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:38.846130  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:38.846143  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:38.912234  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:38.912257  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:38.926642  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:38.926661  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:38.983128  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:38.975680   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.976323   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.977925   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.978355   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.979926   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:38.975680   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.976323   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.977925   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.978355   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.979926   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:38.983140  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:38.983151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:39.042170  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:39.042192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.573431  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:41.584132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:41.584179  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:41.610465  109844 cri.go:89] found id: ""
	I1002 20:58:41.610490  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.610500  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:41.610507  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:41.610571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:41.636463  109844 cri.go:89] found id: ""
	I1002 20:58:41.636481  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.636488  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:41.636493  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:41.636544  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:41.663306  109844 cri.go:89] found id: ""
	I1002 20:58:41.663324  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.663334  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:41.663340  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:41.663389  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:41.689945  109844 cri.go:89] found id: ""
	I1002 20:58:41.689963  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.689970  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:41.689975  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:41.690030  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:41.716483  109844 cri.go:89] found id: ""
	I1002 20:58:41.716498  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.716511  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:41.716515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:41.716563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:41.741653  109844 cri.go:89] found id: ""
	I1002 20:58:41.741670  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.741677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:41.741682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:41.741728  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:41.768401  109844 cri.go:89] found id: ""
	I1002 20:58:41.768418  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.768425  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:41.768433  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:41.768444  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:41.825098  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:41.825108  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:41.825120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:41.885569  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:41.885592  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.914823  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:41.914840  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:41.982285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:41.982309  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
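
The dmesg invocation gathered on each cycle keeps only the last 400 kernel messages at warning severity or higher; as best I can tell from util-linux dmesg, -P disables the pager, -H formats timestamps for humans, and -L=never suppresses color, which suits machine capture:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
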
	I1002 20:58:44.498020  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:44.508926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:44.508975  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:44.534766  109844 cri.go:89] found id: ""
	I1002 20:58:44.534783  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.534791  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:44.534797  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:44.534849  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:44.561400  109844 cri.go:89] found id: ""
	I1002 20:58:44.561418  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.561425  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:44.561429  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:44.561481  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:44.587621  109844 cri.go:89] found id: ""
	I1002 20:58:44.587638  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.587644  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:44.587649  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:44.587696  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:44.612688  109844 cri.go:89] found id: ""
	I1002 20:58:44.612703  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.612709  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:44.612717  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:44.612784  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:44.639713  109844 cri.go:89] found id: ""
	I1002 20:58:44.639728  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.639755  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:44.639763  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:44.639821  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:44.666252  109844 cri.go:89] found id: ""
	I1002 20:58:44.666271  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.666278  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:44.666283  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:44.666330  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:44.692295  109844 cri.go:89] found id: ""
	I1002 20:58:44.692311  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.692318  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:44.692326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:44.692336  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:44.763438  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:44.763462  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.777919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:44.777938  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:44.833114  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:44.833126  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:44.833138  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:44.893410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:44.893436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.425929  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:47.437727  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:47.437800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:47.465106  109844 cri.go:89] found id: ""
	I1002 20:58:47.465125  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.465135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:47.465141  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:47.465202  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:47.492450  109844 cri.go:89] found id: ""
	I1002 20:58:47.492469  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.492477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:47.492487  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:47.492548  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:47.518249  109844 cri.go:89] found id: ""
	I1002 20:58:47.518266  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.518273  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:47.518280  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:47.518329  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:47.546009  109844 cri.go:89] found id: ""
	I1002 20:58:47.546026  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.546035  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:47.546040  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:47.546095  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:47.571969  109844 cri.go:89] found id: ""
	I1002 20:58:47.571984  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.571991  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:47.571995  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:47.572044  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:47.598332  109844 cri.go:89] found id: ""
	I1002 20:58:47.598352  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.598362  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:47.598371  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:47.598433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:47.624909  109844 cri.go:89] found id: ""
	I1002 20:58:47.624923  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.624932  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:47.624942  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:47.624955  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:47.682066  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:47.682078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:47.682089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:47.742340  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:47.742363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.772411  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:47.772428  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:47.841816  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:47.841839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:50.357907  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:50.368776  109844 kubeadm.go:601] duration metric: took 4m2.902167912s to restartPrimaryControlPlane
	W1002 20:58:50.368863  109844 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:58:50.368929  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
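
After 4m02s of failed restart probes, minikube gives up on restartPrimaryControlPlane and falls back to a full reset and re-init. The reset step it runs, reproduced for reference with the versioned binary path taken from the log:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
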
	I1002 20:58:50.818759  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:50.831475  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:58:50.839597  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:58:50.839643  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:58:50.847290  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:58:50.847300  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 20:58:50.847341  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:58:50.854889  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:58:50.854928  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:58:50.862239  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:58:50.869705  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:58:50.869763  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:58:50.877993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.885836  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:58:50.885887  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.893993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:58:50.902316  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:58:50.902371  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
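
The four grep-then-remove steps above are minikube's stale kubeconfig cleanup: a conf file is kept only if it already points at the expected API endpoint. A condensed sketch of the same logic (a hypothetical standalone loop, not minikube's actual implementation):

    for f in admin kubelet controller-manager scheduler; do
      # Keep the file only if it already targets the expected API endpoint
      sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
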
	I1002 20:58:50.910549  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:58:50.946945  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:58:50.946991  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:58:50.966485  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:58:50.966578  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:58:50.966620  109844 kubeadm.go:318] OS: Linux
	I1002 20:58:50.966677  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:58:50.966753  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:58:50.966809  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:58:50.966867  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:58:50.966933  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:58:50.966988  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:58:50.967043  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:58:50.967090  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:58:51.025471  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:58:51.025621  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:58:51.025764  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:58:51.032580  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:58:51.036477  109844 out.go:252]   - Generating certificates and keys ...
	I1002 20:58:51.036579  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:58:51.036655  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:58:51.036755  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:58:51.036828  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:58:51.036907  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:58:51.036961  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:58:51.037039  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:58:51.037113  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:58:51.037183  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:58:51.037249  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:58:51.037279  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:58:51.037325  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:58:51.187682  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:58:51.260672  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:58:51.923940  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:58:51.962992  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:58:52.022920  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:58:52.023298  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:58:52.025586  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:58:52.027495  109844 out.go:252]   - Booting up control plane ...
	I1002 20:58:52.027608  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:58:52.027713  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:58:52.027804  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:58:52.042406  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:58:52.042511  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:58:52.049022  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:58:52.049337  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:58:52.049378  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:58:52.155568  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:58:52.155766  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:58:53.156432  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000945383s
	I1002 20:58:53.159662  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:58:53.159797  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:58:53.159937  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:58:53.160043  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:02:53.160214  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	I1002 21:02:53.160391  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	I1002 21:02:53.160519  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	I1002 21:02:53.160527  109844 kubeadm.go:318] 
	I1002 21:02:53.160620  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:02:53.160688  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:02:53.160785  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:02:53.160862  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:02:53.160927  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:02:53.161001  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:02:53.161004  109844 kubeadm.go:318] 
	I1002 21:02:53.164399  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:02:53.164524  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:02:53.165091  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:02:53.165168  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:02:53.165349  109844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000945383s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
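The two crictl commands kubeadm suggests above can be chained into a single triage loop; a minimal sketch (socket path taken from the message itself; the loop is illustrative and was not run as part of this test):

    SOCK="unix:///var/run/crio/crio.sock"
    # mirror kubeadm's `ps -a | grep kube` via crictl's own name filter,
    # then tail the logs of each matching container, running or exited
    for id in $(sudo crictl --runtime-endpoint "$SOCK" ps -a -q --name kube); do
        echo "== $id =="
        sudo crictl --runtime-endpoint "$SOCK" logs --tail 20 "$id"
    done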
	
	I1002 21:02:53.165441  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:02:53.609874  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:53.623007  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:02:53.623061  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:53.631223  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:02:53.631235  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 21:02:53.631283  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:53.639093  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:02:53.639137  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:02:53.647228  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:53.655566  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:02:53.655610  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:53.663430  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.671338  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:02:53.671390  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.679032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:53.686944  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:02:53.686993  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
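The four grep/rm pairs above are minikube's stale-config sweep: any kubeconfig that does not point at the expected control-plane endpoint is removed before kubeadm init is retried. Condensed into one illustrative loop (minikube actually issues each step separately over SSH):

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8441" "/etc/kubernetes/$f.conf" \
            || sudo rm -f "/etc/kubernetes/$f.conf"
    done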
	I1002 21:02:53.694170  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:02:53.730792  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:02:53.730837  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:02:53.752207  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:02:53.752260  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:02:53.752295  109844 kubeadm.go:318] OS: Linux
	I1002 21:02:53.752337  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:02:53.752403  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:02:53.752440  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:02:53.752485  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:02:53.752585  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:02:53.752641  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:02:53.752685  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:02:53.752720  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:02:53.811160  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:02:53.811301  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:02:53.811426  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:02:53.817686  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:02:53.822264  109844 out.go:252]   - Generating certificates and keys ...
	I1002 21:02:53.822366  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:02:53.822429  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:02:53.822500  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:02:53.822558  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:02:53.822649  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:02:53.822721  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:02:53.822797  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:02:53.822883  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:02:53.822984  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:02:53.823080  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:02:53.823129  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:02:53.823200  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:02:54.089650  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:02:54.165018  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:02:54.351562  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:02:54.606636  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:02:54.799514  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:02:54.799929  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:02:54.802220  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:02:54.804402  109844 out.go:252]   - Booting up control plane ...
	I1002 21:02:54.804516  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:02:54.804616  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:02:54.804724  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:02:54.818368  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:02:54.818509  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:02:54.825531  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:02:54.826683  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:02:54.826734  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:02:54.927546  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:02:54.927690  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:02:55.429241  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.893032ms
	I1002 21:02:55.432296  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:02:55.432407  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 21:02:55.432483  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:02:55.432583  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:06:55.432671  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	I1002 21:06:55.432869  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	I1002 21:06:55.432961  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	I1002 21:06:55.432968  109844 kubeadm.go:318] 
	I1002 21:06:55.433037  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:06:55.433100  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:06:55.433168  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:06:55.433259  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:06:55.433328  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:06:55.433419  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:06:55.433434  109844 kubeadm.go:318] 
	I1002 21:06:55.436835  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:06:55.436949  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:06:55.437474  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:06:55.437568  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:06:55.437594  109844 kubeadm.go:402] duration metric: took 12m8.007755847s to StartCluster
	I1002 21:06:55.437641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:06:55.437710  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:06:55.464382  109844 cri.go:89] found id: ""
	I1002 21:06:55.464398  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.464404  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:06:55.464409  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:06:55.464469  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:06:55.490606  109844 cri.go:89] found id: ""
	I1002 21:06:55.490623  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.490633  109844 logs.go:284] No container was found matching "etcd"
	I1002 21:06:55.490638  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:06:55.490702  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:06:55.516529  109844 cri.go:89] found id: ""
	I1002 21:06:55.516547  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.516560  109844 logs.go:284] No container was found matching "coredns"
	I1002 21:06:55.516565  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:06:55.516631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:06:55.542896  109844 cri.go:89] found id: ""
	I1002 21:06:55.542913  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.542919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:06:55.542926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:06:55.542976  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:06:55.570192  109844 cri.go:89] found id: ""
	I1002 21:06:55.570206  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.570212  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:06:55.570217  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:06:55.570263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:06:55.596069  109844 cri.go:89] found id: ""
	I1002 21:06:55.596092  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.596102  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:06:55.596107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:06:55.596157  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:06:55.621555  109844 cri.go:89] found id: ""
	I1002 21:06:55.621572  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.621579  109844 logs.go:284] No container was found matching "kindnet"
	I1002 21:06:55.621587  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 21:06:55.621600  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:06:55.635371  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:06:55.635389  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:06:55.691316  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:06:55.691337  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:06:55.691347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:06:55.755862  109844 logs.go:123] Gathering logs for container status ...
	I1002 21:06:55.755886  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:06:55.784730  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 21:06:55.784767  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 21:06:55.854494  109844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:06:55.854545  109844 out.go:285] * 
	W1002 21:06:55.854631  109844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.854657  109844 out.go:285] * 
	W1002 21:06:55.856372  109844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:06:55.860308  109844 out.go:203] 
	W1002 21:06:55.861642  109844 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.861662  109844 out.go:285] * 
	I1002 21:06:55.863851  109844 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.23149511Z" level=info msg="createCtr: removing container a11ad10a6facd115efda51f95be01c7d4b18e85a7266a175f7ba04020606f46a" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.231548884Z" level=info msg="createCtr: deleting container a11ad10a6facd115efda51f95be01c7d4b18e85a7266a175f7ba04020606f46a from storage" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:50 functional-012915 crio[5820]: time="2025-10-02T21:06:50.233892054Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=627fdba6-7b17-4f70-a363-cc117843eeba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.205556556Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=1de1a49a-6746-43c3-8fdb-9dadd10c7f27 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.206381729Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6bcef6bf-e782-40ad-bfef-f18dddb9b25a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.20714502Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-012915/kube-scheduler" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.207343617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.210669982Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.211138693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.229548778Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230898309Z" level=info msg="createCtr: deleting container ID f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213 from idIndex" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230945457Z" level=info msg="createCtr: removing container f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.230976669Z" level=info msg="createCtr: deleting container f1b43a114d12d7820a2c43e3fe1c710596a426853c1dbefd213cefc8088ed213 from storage" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:53 functional-012915 crio[5820]: time="2025-10-02T21:06:53.232965467Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=15191aa0-8978-403b-a4ff-ccfbbb6beb0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.204652395Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=946c0224-2954-4597-abd9-48c739fd05e4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.205506999Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=f1f8205b-13ab-48d1-89be-5ddfe7f89bfc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.206240102Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.206447331Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.210283193Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.210863417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.228124139Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229573851Z" level=info msg="createCtr: deleting container ID 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from idIndex" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229621183Z" level=info msg="createCtr: removing container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229659341Z" level=info msg="createCtr: deleting container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from storage" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.231972859Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:58.873385   15891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:58.873900   15891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:58.875635   15891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:58.876163   15891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:58.877813   15891 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:06:58 up  2:49,  0 user,  load average: 0.16, 0.07, 0.19
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:06:50 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:50 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:50 functional-012915 kubelet[14964]: E1002 21:06:50.234356   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.349849   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.829284   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: I1002 21:06:51.984192   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:06:51 functional-012915 kubelet[14964]: E1002 21:06:51.984565   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.205148   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233255   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:06:53 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:53 functional-012915 kubelet[14964]:  > podSandboxID="8fcd09580c94c358972341d218f18641fb01c2881f93974b0a738c79d068fdb3"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233360   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:53 functional-012915 kubelet[14964]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:53 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:53 functional-012915 kubelet[14964]: E1002 21:06:53.233399   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.204278   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.218859   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232216   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:06:55 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:55 functional-012915 kubelet[14964]:  > podSandboxID="0a35d159a682c6cd7da21a9fb2e3efef99f6f6c3f06af6071bd80e1de599842e"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232329   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:55 functional-012915 kubelet[14964]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:55 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232366   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: E1002 21:06:58.830030   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
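The repeated "CreateContainerError: container create failed: cannot open sd-bus: No such file or directory" above is why every control-plane container (kube-scheduler, etcd, and later kube-apiserver) stays down: with a systemd cgroup manager (the docker info captured later in this report shows CgroupDriver:systemd), the OCI runtime asks systemd over D-Bus to create each container's scope, and that call fails when no bus endpoint is reachable inside the node. A minimal Go probe, assuming the conventional socket paths (neither path is taken from this report) and that it is run inside the functional-012915 container, e.g. via docker exec:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Assumed conventional locations: /run/systemd/system marks a booted systemd,
		// /run/dbus/system_bus_socket is the system bus endpoint an sd-bus client opens.
		for _, p := range []string{"/run/systemd/system", "/run/dbus/system_bus_socket"} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: missing (%v)\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}

If either path is missing, the runtime's sd-bus connection fails exactly as logged, independent of anything Kubernetes does.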
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (298.294656ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (1.85s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-012915 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-012915 apply -f testdata/invalidsvc.yaml: exit status 1 (64.753945ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-012915 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.06s)
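The failure here is not the intentionally invalid manifest itself: kubectl first downloads the cluster's OpenAPI schema to run client-side validation, and that GET (quoted in the stderr above) dies with connection refused because the apiserver on 192.168.49.2:8441 is down. A rough Go sketch of just that pre-validation request, with the URL taken from the error message (the TLS setup is a placeholder, not kubectl's real kubeconfig handling):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the ?timeout=32s in the logged URL
			Transport: &http.Transport{
				// Placeholder: kubectl would trust the CA from the kubeconfig instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8441/openapi/v2?timeout=32s")
		if err != nil {
			fmt.Println("openapi download failed:", err) // here: connection refused
			return
		}
		defer resp.Body.Close()
		fmt.Println("openapi status:", resp.Status)
	}

Against a healthy cluster this returns the schema; here it fails before TLS even starts, so the --validate=false escape hatch mentioned in the error would only move the failure to the apply call itself.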

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012915 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012915 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012915 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-012915 --alsologtostderr -v=1] stderr:
I1002 21:07:14.501162  132017 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:14.501304  132017 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:14.501314  132017 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:14.501319  132017 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:14.501566  132017 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:14.501905  132017 mustload.go:65] Loading cluster: functional-012915
I1002 21:07:14.502328  132017 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:14.502728  132017 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:14.523144  132017 host.go:66] Checking if "functional-012915" exists ...
I1002 21:07:14.523429  132017 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 21:07:14.580687  132017 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:14.571341915 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 21:07:14.580815  132017 api_server.go:166] Checking apiserver status ...
I1002 21:07:14.580864  132017 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 21:07:14.580906  132017 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:14.598359  132017 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
W1002 21:07:14.703979  132017 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1002 21:07:14.705872  132017 out.go:179] * The control-plane node functional-012915 apiserver is not running: (state=Stopped)
I1002 21:07:14.707203  132017 out.go:179]   To start a cluster, run: "minikube start -p functional-012915"
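The stderr above shows why the dashboard never printed a URL: before opening a proxy, the command checks apiserver health by running sudo pgrep -xnf kube-apiserver.*minikube.* over SSH (the api_server.go:166 lines above), and pgrep exits 1 when nothing matches. A stripped-down sketch of that check, run directly on the node with sudo omitted:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same pattern the harness logged; pgrep must be installed on the node.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits 1 when no process matches, which is the case in this report.
			fmt.Println("apiserver process not found:", err)
			return
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	}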
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
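The port map in NetworkSettings.Ports above is what templates like {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} (used by the harness at 21:07:14.580906) walk to find the SSH host port, 32778 in this run. A small sketch that decodes docker inspect output from stdin and performs the same two-level lookup, with the struct trimmed to the fields the lookup needs:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// docker inspect emits a JSON array of container objects.
	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var cs []container
		if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
			panic(err)
		}
		// Equivalent of the template's (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort.
		fmt.Println(cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}

Piping docker inspect functional-012915 into this reproduces the harness's port resolution.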
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (301.487785ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-012915 image ls                                                                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image save kicbase/echo-server:functional-012915 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image rm kicbase/echo-server:functional-012915 --alsologtostderr                                                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh -- ls -la /mount-9p                                                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image ls                                                                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ image     │ functional-012915 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image save --daemon kicbase/echo-server:functional-012915 --alsologtostderr                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount3 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh findmnt -T /mount1                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount1 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/84100.pem                                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /usr/share/ca-certificates/84100.pem                                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount1                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/841002.pem                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount2                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /usr/share/ca-certificates/841002.pem                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount3                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount     │ -p functional-012915 --kill=true                                                                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-012915 --alsologtostderr -v=1                                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh sudo cat /etc/test/nested/copy/84100/hosts                                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:07:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:07:06.995028  127793 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.995116  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995122  127793 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.995128  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995487  127793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.995965  127793 out.go:368] Setting JSON to false
	I1002 21:07:06.996965  127793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.997080  127793 start.go:140] virtualization: kvm guest
	I1002 21:07:06.999028  127793 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:07.000503  127793 notify.go:220] Checking for updates...
	I1002 21:07:07.000539  127793 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:07.002031  127793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:07.003411  127793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:07.004900  127793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:07.006037  127793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:07.007128  127793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:07.008912  127793 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:07.009362  127793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:07.034759  127793 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:07.034869  127793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:07.097804  127793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:07.08803788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:07.097914  127793 docker.go:318] overlay module found
	I1002 21:07:07.101324  127793 out.go:179] * Using the docker driver based on existing profile
	I1002 21:07:07.102629  127793 start.go:304] selected driver: docker
	I1002 21:07:07.102647  127793 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:07.102753  127793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:07.104576  127793 out.go:203] 
	W1002 21:07:07.105751  127793 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:07:07.107027  127793 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.32623916Z" level=info msg="Checking image status: kicbase/echo-server:functional-012915" id=652a0cc7-96b4-4cc9-8770-7f90889ad5d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352056366Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-012915" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352207943Z" level=info msg="Image docker.io/kicbase/echo-server:functional-012915 not found" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352266249Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-012915 found" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.3774175Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-012915" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.377551842Z" level=info msg="Image localhost/kicbase/echo-server:functional-012915 not found" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.377589602Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-012915 found" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.122395444Z" level=info msg="Checking image status: kicbase/echo-server:functional-012915" id=609c7946-812d-4f23-ac6b-eb4871d7fd4d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.150148722Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-012915" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.15030666Z" level=info msg="Image docker.io/kicbase/echo-server:functional-012915 not found" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.150356912Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-012915 found" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176254507Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-012915" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176383818Z" level=info msg="Image localhost/kicbase/echo-server:functional-012915 not found" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176415337Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-012915 found" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.205147829Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=699b8931-988e-4b9f-8a6c-fc8b4bcc55ac name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.206204507Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e0dcae5d-4574-432e-8c24-ebc3abdfca4b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.207417759Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.207646949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.212703465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.213358544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.229672236Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231295319Z" level=info msg="createCtr: deleting container ID 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e from idIndex" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231345788Z" level=info msg="createCtr: removing container 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231399172Z" level=info msg="createCtr: deleting container 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e from storage" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.234646211Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:07:15.705992   17989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.706638   17989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.708525   17989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.708995   17989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.710731   17989 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:07:15 up  2:49,  0 user,  load average: 1.54, 0.39, 0.29
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.237406   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 21:07:09 functional-012915 kubelet[14964]: E1002 21:07:09.808618   14964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.205836   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237468   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:11 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:11 functional-012915 kubelet[14964]:  > podSandboxID="78541c97616f3ec4e232f9ab35845168ea396e7284f2b19d4d8b8efd1c5094a2"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237611   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:11 functional-012915 kubelet[14964]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:11 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237648   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.352873   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: E1002 21:07:12.832640   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: I1002 21:07:12.991217   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: E1002 21:07:12.991663   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.029458   14964 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-012915&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.204552   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235000   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:14 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:14 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235109   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:14 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:14 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235153   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:07:15 functional-012915 kubelet[14964]: E1002 21:07:15.220732   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	

                                                
                                                
-- /stdout --
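Buried in the "Last Start" log above is why this profile never recovered: the restart attempt was rejected with RSRC_INSUFFICIENT_REQ_MEMORY before doing any work, because the requested 250MiB falls far below the 1800MB usable minimum quoted in the same message. The check amounts to a unit conversion and a comparison, roughly (a sketch of the arithmetic, not minikube's actual validation code):

	package main

	import "fmt"

	func main() {
		const requestedMiB = 250 // requested allocation, from the log line above
		const minimumMB = 1800   // usable minimum quoted in the same message
		requestedMB := requestedMiB * 1024 * 1024 / (1000 * 1000) // MiB -> MB, yields 262
		if requestedMB < minimumMB {
			fmt.Printf("requested %dMB < minimum %dMB: exiting with RSRC_INSUFFICIENT_REQ_MEMORY\n",
				requestedMB, minimumMB)
		}
	}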
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (305.48077ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (1.64s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (3.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 status: exit status 2 (348.812175ms)

                                                
                                                
-- stdout --
	functional-012915
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-amd64 -p functional-012915 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (348.908644ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Running,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-amd64 -p functional-012915 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 status -o json: exit status 2 (364.181996ms)

                                                
                                                
-- stdout --
	{"Name":"functional-012915","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-amd64 -p functional-012915 status -o json" : exit status 2
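(For tooling that needs to react to this state, the -o json form is the easiest to consume; a minimal sketch, assuming the field names shown in the JSON above and the profile name from this run, of decoding it despite the non-zero exit:)

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// minikubeStatus mirrors the JSON printed above by `minikube status -o json`.
	type minikubeStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// `status` exits non-zero when a component is stopped (exit status 2
		// in this run) but still writes the JSON to stdout, so keep the
		// output and tolerate an *exec.ExitError.
		out, err := exec.Command("minikube", "-p", "functional-012915", "status", "-o", "json").Output()
		if _, isExit := err.(*exec.ExitError); err != nil && !isExit {
			fmt.Println("run:", err)
			return
		}
		var s minikubeStatus
		if err := json.Unmarshal(out, &s); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("host=%s apiserver=%s\n", s.Host, s.APIServer) // host=Running apiserver=Stopped
	}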
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
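(The Ports map in the inspect output above is what minikube itself walks to find the container's forwarded SSH port; the "Last Start" log below runs the same Go template. A minimal sketch of that lookup, assuming the docker CLI is on PATH:)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template minikube logs below: index into NetworkSettings.Ports
		// for "22/tcp" and take the first binding's HostPort (32778 in this run).
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-012915").Output()
		if err != nil {
			fmt.Println("inspect:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}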
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (390.322731ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-012915 logs -n 25: (1.036870253s)
helpers_test.go:260: TestFunctional/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                       │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │ 02 Oct 25 20:54 UTC │
	│ kubectl │ functional-012915 kubectl -- --context functional-012915 get pods                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ start   │ -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cp      │ functional-012915 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config unset cpus                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service list                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ config  │ functional-012915 config set cpus 2                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config unset cpus                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service list -o json                                                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh echo hello                                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ cp      │ functional-012915 cp functional-012915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd418601657/001/cp-test.txt │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service --namespace=default --https --url hello-node                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh cat /etc/hostname                                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service hello-node --url --format={{.IP}}                                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service hello-node --url                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ cp      │ functional-012915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:54:43
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:54:43.844587  109844 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:43.844861  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.844865  109844 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:43.844868  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.845038  109844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:54:43.845491  109844 out.go:368] Setting JSON to false
	I1002 20:54:43.846405  109844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9425,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:54:43.846500  109844 start.go:140] virtualization: kvm guest
	I1002 20:54:43.848999  109844 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:54:43.850877  109844 notify.go:220] Checking for updates...
	I1002 20:54:43.850921  109844 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:54:43.852793  109844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:43.854834  109844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:54:43.856692  109844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:54:43.858365  109844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:54:43.860403  109844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:43.863103  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:43.863204  109844 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:54:43.889469  109844 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:54:43.889551  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:43.945234  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.934776618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:43.945360  109844 docker.go:318] overlay module found
	I1002 20:54:43.947426  109844 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:43.949164  109844 start.go:304] selected driver: docker
	I1002 20:54:43.949174  109844 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:43.949277  109844 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:43.949355  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:44.006056  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.996347889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:44.006730  109844 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:54:44.006766  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:44.006828  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:44.006872  109844 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:44.008980  109844 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:54:44.010355  109844 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:54:44.011760  109844 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:54:44.012903  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:44.012938  109844 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:54:44.012951  109844 cache.go:58] Caching tarball of preloaded images
	I1002 20:54:44.012993  109844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:54:44.013033  109844 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:54:44.013038  109844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:54:44.013135  109844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:54:44.033578  109844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:54:44.033589  109844 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:54:44.033606  109844 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:54:44.033634  109844 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:54:44.033690  109844 start.go:364] duration metric: took 42.12µs to acquireMachinesLock for "functional-012915"
	I1002 20:54:44.033704  109844 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:54:44.033708  109844 fix.go:54] fixHost starting: 
	I1002 20:54:44.033949  109844 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:54:44.051193  109844 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:54:44.051212  109844 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:54:44.053363  109844 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:54:44.053388  109844 machine.go:93] provisionDockerMachine start ...
	I1002 20:54:44.053449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.071022  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.071263  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.071270  109844 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:54:44.215777  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.215796  109844 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:54:44.215846  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.233786  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.234003  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.234012  109844 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:54:44.386648  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.386732  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.405002  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.405287  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.405300  109844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:54:44.550595  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:54:44.550613  109844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:54:44.550630  109844 ubuntu.go:190] setting up certificates
	I1002 20:54:44.550637  109844 provision.go:84] configureAuth start
	I1002 20:54:44.550684  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:44.568931  109844 provision.go:143] copyHostCerts
	I1002 20:54:44.568985  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:54:44.569001  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:54:44.569078  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:54:44.569204  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:54:44.569210  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:54:44.569250  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:54:44.569359  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:54:44.569365  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:54:44.569398  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:54:44.569559  109844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:54:44.708488  109844 provision.go:177] copyRemoteCerts
	I1002 20:54:44.708542  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:54:44.708581  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.726299  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:44.828230  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:54:44.845801  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:54:44.864647  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:54:44.886083  109844 provision.go:87] duration metric: took 335.431145ms to configureAuth
	I1002 20:54:44.886105  109844 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:54:44.886322  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:44.886449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.904652  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.904873  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.904882  109844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:54:45.179966  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:54:45.179982  109844 machine.go:96] duration metric: took 1.12658745s to provisionDockerMachine
	I1002 20:54:45.179993  109844 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:54:45.180006  109844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:54:45.180072  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:54:45.180106  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.198206  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.300487  109844 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:54:45.304200  109844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:54:45.304220  109844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:54:45.304236  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:54:45.304298  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:54:45.304376  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:54:45.304448  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:54:45.304489  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:54:45.312033  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:45.329488  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:54:45.347685  109844 start.go:296] duration metric: took 167.67425ms for postStartSetup
	I1002 20:54:45.347776  109844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:54:45.347829  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.365819  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.465348  109844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:54:45.470042  109844 fix.go:56] duration metric: took 1.436324828s for fixHost
	I1002 20:54:45.470060  109844 start.go:83] releasing machines lock for "functional-012915", held for 1.436363927s
	I1002 20:54:45.470140  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:45.487689  109844 ssh_runner.go:195] Run: cat /version.json
	I1002 20:54:45.487729  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.487802  109844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:54:45.487851  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.505570  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.507416  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.673212  109844 ssh_runner.go:195] Run: systemctl --version
	I1002 20:54:45.680090  109844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:54:45.716457  109844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:54:45.721126  109844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:54:45.721199  109844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:54:45.729223  109844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:54:45.729241  109844 start.go:495] detecting cgroup driver to use...
	I1002 20:54:45.729276  109844 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:54:45.729332  109844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:54:45.744221  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:54:45.757221  109844 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:54:45.757262  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:54:45.772166  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:54:45.785276  109844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:54:45.871303  109844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:54:45.959396  109844 docker.go:234] disabling docker service ...
	I1002 20:54:45.959460  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:54:45.974048  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:54:45.986376  109844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:54:46.071815  109844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:54:46.159773  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:54:46.172020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:54:46.186483  109844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:54:46.186540  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.195504  109844 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:54:46.195591  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.205033  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.213732  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.222589  109844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:54:46.230603  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.239758  109844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.248194  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.256956  109844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:54:46.264263  109844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:54:46.271577  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.354483  109844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:54:46.464818  109844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:54:46.464871  109844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:54:46.468860  109844 start.go:563] Will wait 60s for crictl version
	I1002 20:54:46.468905  109844 ssh_runner.go:195] Run: which crictl
	I1002 20:54:46.472439  109844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:54:46.496177  109844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 20:54:46.496237  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.524348  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.554038  109844 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:54:46.555482  109844 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:54:46.572825  109844 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:54:46.579140  109844 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:54:46.580455  109844 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:54:46.580599  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:46.580680  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.615204  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.615216  109844 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:54:46.615259  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.641403  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.641428  109844 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:54:46.641435  109844 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:54:46.641523  109844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:54:46.641593  109844 ssh_runner.go:195] Run: crio config
	I1002 20:54:46.685535  109844 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:54:46.685558  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:46.685570  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:46.685580  109844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:54:46.685603  109844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:54:46.685708  109844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
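
	The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only sketch that splits such a stream and lists each document's kind, as a cheap sanity check before the file is handed to kubeadm (the path is the one from the log; error handling kept minimal for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// YAML documents are separated by a line containing only "---".
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Println(strings.TrimSpace(line))
			}
		}
	}
}
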
	
	I1002 20:54:46.685786  109844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:54:46.694168  109844 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:54:46.694220  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:54:46.701920  109844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:54:46.714502  109844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:54:46.726979  109844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:54:46.739184  109844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:54:46.742937  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.828267  109844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 20:54:46.841290  109844 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:54:46.841302  109844 certs.go:195] generating shared ca certs ...
	I1002 20:54:46.841315  109844 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:54:46.841439  109844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:54:46.841480  109844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:54:46.841486  109844 certs.go:257] generating profile certs ...
	I1002 20:54:46.841556  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:54:46.841595  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:54:46.841625  109844 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:54:46.841728  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:54:46.841789  109844 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:54:46.841795  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:54:46.841816  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:54:46.841847  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:54:46.841870  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:54:46.841921  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:46.842546  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:54:46.860833  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:54:46.878996  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:54:46.897504  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:54:46.914816  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:54:46.931903  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:54:46.948901  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:54:46.965859  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:54:46.982982  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:54:47.000600  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:54:47.018108  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:54:47.035448  109844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:54:47.047886  109844 ssh_runner.go:195] Run: openssl version
	I1002 20:54:47.053789  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:54:47.062187  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066098  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066148  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.100024  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:54:47.108632  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:54:47.118249  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122176  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122226  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.156807  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:54:47.165260  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:54:47.173954  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177825  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177879  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.212057  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 20:54:47.220716  109844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:54:47.224961  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:54:47.259305  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:54:47.293091  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:54:47.327486  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:54:47.361854  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:54:47.395871  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
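
	The "-checkend 86400" runs above ask openssl whether each certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force regeneration. A stdlib Go equivalent of one such check (same path as the last log line):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors "openssl x509 -checkend 86400": fail if expiry is within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate ok")
}
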
	I1002 20:54:47.429860  109844 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:47.429950  109844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:54:47.429996  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.458514  109844 cri.go:89] found id: ""
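
	The empty result above means no kube-system containers exist yet on the node. A sketch of the same query via os/exec (assumes crictl is on PATH and the caller can reach the CRI socket; the log runs it under sudo for that reason):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same crictl invocation as the log line above.
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
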
	I1002 20:54:47.458565  109844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:54:47.466572  109844 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:54:47.466595  109844 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:54:47.466642  109844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:54:47.473967  109844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.474578  109844 kubeconfig.go:125] found "functional-012915" server: "https://192.168.49.2:8441"
	I1002 20:54:47.476054  109844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:54:47.483705  109844 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:40:16.332502550 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:54:46.736875917 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
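
	The unified diff above is how minikube decides to reconfigure: the deployed kubeadm.yaml still carries the default admission plugins, while the freshly rendered .new file has the user-supplied NamespaceAutoProvision. A minimal drift check in the same spirit (minikube itself shells out to "sudo diff -u" so the log gets a human-readable hunk; this byte-wise comparison is a simplified analog):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	deployed, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	fresh, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err1 != nil || err2 != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err1, err2)
		os.Exit(1)
	}
	if bytes.Equal(deployed, fresh) {
		fmt.Println("no drift; existing config can be reused")
		return
	}
	fmt.Println("drift detected; cluster will be reconfigured")
}
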
	I1002 20:54:47.483713  109844 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:54:47.483724  109844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:54:47.483782  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.509815  109844 cri.go:89] found id: ""
	I1002 20:54:47.509892  109844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:54:47.553124  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:54:47.561262  109844 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:54:47.561322  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:54:47.569534  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:54:47.577441  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.577491  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:54:47.585032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.592533  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.592581  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.600040  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:54:47.607328  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.607365  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:54:47.614787  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:54:47.622401  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:47.663022  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.396196  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.576311  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.625411  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.679287  109844 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:54:48.679369  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.179574  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.180317  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.680215  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.179826  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.679618  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.180390  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.679884  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.180480  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.180264  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.679704  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.179880  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.679789  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.179784  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.679611  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.179499  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.680068  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.179593  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.680342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.180363  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.679719  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.180464  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.680219  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.179572  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.679989  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.179867  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.680465  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.179787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.680167  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.179791  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.179712  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.680091  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.179473  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.680424  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.179668  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.680232  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.180357  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.679960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.180406  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.679893  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.180470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.680102  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.180344  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.679766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.180348  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.679643  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.180121  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.679815  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.179492  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.679526  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.180454  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.679641  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.180481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.679596  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.179991  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.680447  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.179814  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.679604  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.180037  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.680355  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.180349  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.679595  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.179952  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.680267  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.179901  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.680376  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.180156  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.679931  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.180000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.680128  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.179481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.680099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.180243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.680414  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.180290  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.680286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.179866  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.680103  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.180483  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.680117  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.179477  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.679634  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.180114  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.680389  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.179833  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.679848  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.180002  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.679520  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.180220  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.679624  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.179932  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.180365  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.679590  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.179548  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.680243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.179674  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.680191  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.179865  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.680176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.179534  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.679913  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.180457  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.679626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.179639  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.679943  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.179573  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.680221  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.180342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.679876  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.180254  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.679532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.180286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.679433  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.179977  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.679540  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:48.180382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
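
	The pgrep runs above repeat on a roughly 500ms cadence until the kube-apiserver process appears or the wait gives up. A sketch of that polling pattern (the probe command matches the log; the 1-minute deadline here is an assumption for illustration, not minikube's actual timeout):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 once a matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process appeared")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for apiserver process")
			return
		case <-ticker.C:
		}
	}
}
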
	I1002 20:55:48.679912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:48.679971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:48.706989  109844 cri.go:89] found id: ""
	I1002 20:55:48.707014  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.707020  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:48.707025  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:48.707071  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:48.733283  109844 cri.go:89] found id: ""
	I1002 20:55:48.733299  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.733306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:48.733311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:48.733361  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:48.761228  109844 cri.go:89] found id: ""
	I1002 20:55:48.761245  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.761250  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:48.761256  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:48.761313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:48.788501  109844 cri.go:89] found id: ""
	I1002 20:55:48.788516  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.788522  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:48.788527  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:48.788579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:48.814616  109844 cri.go:89] found id: ""
	I1002 20:55:48.814636  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.814646  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:48.814651  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:48.814703  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:48.841518  109844 cri.go:89] found id: ""
	I1002 20:55:48.841538  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.841548  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:48.841555  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:48.841624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:48.869254  109844 cri.go:89] found id: ""
	I1002 20:55:48.869278  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.869288  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:48.869311  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:48.869335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:48.883919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:48.883937  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:48.941687  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
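
	"connection refused" in the kubectl errors above means nothing is listening on the apiserver port at all, which is consistent with the earlier pgrep polling never finding a kube-apiserver process. A quick probe that distinguishes a closed port from a slow or unhealthy apiserver:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 8441 is the APIServerPort from the cluster config above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port closed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}
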
	I1002 20:55:48.941698  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:48.941710  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:49.007787  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:49.007810  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:49.038133  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:49.038157  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:51.609461  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:51.620229  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:51.620296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:51.647003  109844 cri.go:89] found id: ""
	I1002 20:55:51.647022  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.647028  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:51.647033  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:51.647087  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:51.673376  109844 cri.go:89] found id: ""
	I1002 20:55:51.673394  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.673402  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:51.673408  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:51.673467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:51.700685  109844 cri.go:89] found id: ""
	I1002 20:55:51.700701  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.700719  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:51.700724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:51.700792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:51.726660  109844 cri.go:89] found id: ""
	I1002 20:55:51.726677  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.726684  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:51.726689  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:51.726762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:51.753630  109844 cri.go:89] found id: ""
	I1002 20:55:51.753646  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.753652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:51.753657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:51.753750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:51.779127  109844 cri.go:89] found id: ""
	I1002 20:55:51.779146  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.779155  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:51.779161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:51.779235  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:51.805960  109844 cri.go:89] found id: ""
	I1002 20:55:51.805979  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.805988  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:51.805997  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:51.806006  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:51.835916  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:51.835939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:51.905127  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:51.905159  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:51.920189  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:51.920209  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:51.976010  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:51.976023  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:51.976035  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.539314  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:54.550248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:54.550316  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:54.577239  109844 cri.go:89] found id: ""
	I1002 20:55:54.577254  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.577261  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:54.577265  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:54.577311  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:54.603907  109844 cri.go:89] found id: ""
	I1002 20:55:54.603927  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.603935  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:54.603941  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:54.603991  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:54.630527  109844 cri.go:89] found id: ""
	I1002 20:55:54.630543  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.630549  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:54.630562  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:54.630624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:54.658661  109844 cri.go:89] found id: ""
	I1002 20:55:54.658680  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.658688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:54.658693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:54.658774  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:54.684747  109844 cri.go:89] found id: ""
	I1002 20:55:54.684769  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.684807  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:54.684814  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:54.684890  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:54.711715  109844 cri.go:89] found id: ""
	I1002 20:55:54.711732  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.711777  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:54.711785  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:54.711842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:54.738961  109844 cri.go:89] found id: ""
	I1002 20:55:54.738979  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.738987  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:54.738996  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:54.739009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:54.806223  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:54.806250  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:54.820749  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:54.820771  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:54.877826  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:54.870974    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.871493    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873132    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873593    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.875041    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:54.870974    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.871493    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873132    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873593    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.875041    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:54.877845  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:54.877872  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.943126  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:54.943152  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:57.473420  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:57.484300  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:57.484350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:57.510256  109844 cri.go:89] found id: ""
	I1002 20:55:57.510274  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.510281  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:57.510285  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:57.510350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:57.536726  109844 cri.go:89] found id: ""
	I1002 20:55:57.536756  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.536766  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:57.536773  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:57.536824  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:57.562388  109844 cri.go:89] found id: ""
	I1002 20:55:57.562407  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.562416  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:57.562421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:57.562467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:57.589542  109844 cri.go:89] found id: ""
	I1002 20:55:57.589569  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.589577  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:57.589582  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:57.589647  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:57.616763  109844 cri.go:89] found id: ""
	I1002 20:55:57.616781  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.616790  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:57.616796  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:57.616842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:57.642618  109844 cri.go:89] found id: ""
	I1002 20:55:57.642637  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.642646  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:57.642652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:57.642700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:57.668671  109844 cri.go:89] found id: ""
	I1002 20:55:57.668686  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.668693  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:57.668700  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:57.668714  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:57.733001  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:57.733023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:57.747314  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:57.747338  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:57.803286  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:57.796365    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.796951    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.798536    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.799065    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.800640    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:55:57.796365    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.796951    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.798536    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.799065    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.800640    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:55:57.803303  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:57.803316  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:57.869484  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:57.869515  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
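As a point of reference, the log-gathering pass above can be reproduced by hand inside the node (e.g. `minikube ssh` into the functional profile); the commands below are copied from the `Run:` lines in this log, so they are a sketch of the same checks rather than new test output:

  # Look for control-plane containers (every such check in this run comes back empty)
  sudo crictl ps -a --quiet --name=kube-apiserver
  sudo crictl ps -a --quiet --name=etcd
  sudo crictl ps -a --quiet --name=kube-scheduler
  # Collect the same logs the pass gathers
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig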
	I1002 20:56:00.399551  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:00.410170  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:00.410218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:00.436280  109844 cri.go:89] found id: ""
	I1002 20:56:00.436299  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.436306  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:00.436313  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:00.436368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:00.463444  109844 cri.go:89] found id: ""
	I1002 20:56:00.463461  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.463467  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:00.463479  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:00.463542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:00.489898  109844 cri.go:89] found id: ""
	I1002 20:56:00.489912  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.489919  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:00.489923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:00.489970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:00.516907  109844 cri.go:89] found id: ""
	I1002 20:56:00.516925  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.516932  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:00.516937  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:00.516987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:00.543495  109844 cri.go:89] found id: ""
	I1002 20:56:00.543512  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.543519  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:00.543524  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:00.543575  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:00.569648  109844 cri.go:89] found id: ""
	I1002 20:56:00.569664  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.569670  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:00.569675  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:00.569722  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:00.596695  109844 cri.go:89] found id: ""
	I1002 20:56:00.596712  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.596719  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:00.596726  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:00.596756  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:00.664900  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:00.664923  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:00.679401  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:00.679420  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:00.736278  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:00.729378    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.729909    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731467    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731953    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.733441    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:00.729378    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.729909    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731467    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731953    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.733441    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:00.736292  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:00.736302  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:00.801067  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:00.801089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[The diagnostic cycle above then repeats at 20:56:03, 20:56:06, 20:56:09, 20:56:12, 20:56:15, 20:56:18, and 20:56:21, identical except for timestamps and kubectl PIDs: every pass finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet containers, gathers the kubelet, dmesg, CRI-O, and container-status logs, and fails "describe nodes" with the same connection-refused errors against localhost:8441.]
	I1002 20:56:23.844485  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:23.855704  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:23.855785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:23.881987  109844 cri.go:89] found id: ""
	I1002 20:56:23.882003  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.882009  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:23.882014  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:23.882058  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:23.908092  109844 cri.go:89] found id: ""
	I1002 20:56:23.908109  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.908115  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:23.908121  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:23.908175  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:23.933489  109844 cri.go:89] found id: ""
	I1002 20:56:23.933503  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.933509  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:23.933514  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:23.933560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:23.958962  109844 cri.go:89] found id: ""
	I1002 20:56:23.958978  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.958985  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:23.958991  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:23.959039  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:23.985206  109844 cri.go:89] found id: ""
	I1002 20:56:23.985222  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.985231  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:23.985237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:23.985298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:24.011436  109844 cri.go:89] found id: ""
	I1002 20:56:24.011453  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.011460  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:24.011465  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:24.011512  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:24.036401  109844 cri.go:89] found id: ""
	I1002 20:56:24.036417  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.036423  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:24.036431  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:24.036447  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:24.050446  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:24.050465  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:24.105883  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:24.099062    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.099587    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101050    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101530    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.103091    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:24.105896  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:24.105906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:24.165660  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:24.165683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:24.194659  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:24.194677  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:26.765857  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:26.776723  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:26.776795  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:26.803878  109844 cri.go:89] found id: ""
	I1002 20:56:26.803894  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.803901  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:26.803906  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:26.803960  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:26.828926  109844 cri.go:89] found id: ""
	I1002 20:56:26.828944  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.828950  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:26.828955  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:26.829002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:26.854812  109844 cri.go:89] found id: ""
	I1002 20:56:26.854828  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.854834  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:26.854840  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:26.854887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:26.881665  109844 cri.go:89] found id: ""
	I1002 20:56:26.881682  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.881688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:26.881693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:26.881763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:26.909265  109844 cri.go:89] found id: ""
	I1002 20:56:26.909284  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.909294  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:26.909301  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:26.909355  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:26.935117  109844 cri.go:89] found id: ""
	I1002 20:56:26.935133  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.935139  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:26.935144  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:26.935200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:26.961377  109844 cri.go:89] found id: ""
	I1002 20:56:26.961392  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.961399  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:26.961406  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:26.961417  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:26.989187  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:26.989204  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:27.056354  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:27.056379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:27.070926  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:27.070944  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:27.127442  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:27.119650    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.120189    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.122490    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.123013    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.124580    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:27.127456  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:27.127473  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:29.687547  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:29.698733  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:29.698810  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:29.724706  109844 cri.go:89] found id: ""
	I1002 20:56:29.724721  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.724727  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:29.724732  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:29.724794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:29.752274  109844 cri.go:89] found id: ""
	I1002 20:56:29.752291  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.752297  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:29.752308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:29.752369  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:29.778792  109844 cri.go:89] found id: ""
	I1002 20:56:29.778807  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.778813  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:29.778818  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:29.778867  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:29.804447  109844 cri.go:89] found id: ""
	I1002 20:56:29.804468  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.804485  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:29.804490  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:29.804540  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:29.830280  109844 cri.go:89] found id: ""
	I1002 20:56:29.830301  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.830310  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:29.830316  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:29.830375  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:29.855193  109844 cri.go:89] found id: ""
	I1002 20:56:29.855209  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.855215  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:29.855220  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:29.855270  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:29.881092  109844 cri.go:89] found id: ""
	I1002 20:56:29.881107  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.881114  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:29.881122  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:29.881132  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:29.948531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:29.948565  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:29.962996  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:29.963015  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:30.019733  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:30.012437    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.013106    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.014710    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.015163    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.016849    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:30.019769  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:30.019784  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:30.080302  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:30.080332  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:32.612620  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:32.623619  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:32.623669  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:32.649868  109844 cri.go:89] found id: ""
	I1002 20:56:32.649884  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.649890  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:32.649895  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:32.649947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:32.676993  109844 cri.go:89] found id: ""
	I1002 20:56:32.677011  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.677020  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:32.677026  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:32.677084  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:32.703005  109844 cri.go:89] found id: ""
	I1002 20:56:32.703026  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.703036  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:32.703042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:32.703105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:32.728641  109844 cri.go:89] found id: ""
	I1002 20:56:32.728657  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.728663  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:32.728668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:32.728716  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:32.754904  109844 cri.go:89] found id: ""
	I1002 20:56:32.754922  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.754931  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:32.754938  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:32.754996  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:32.780607  109844 cri.go:89] found id: ""
	I1002 20:56:32.780623  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.780632  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:32.780638  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:32.780700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:32.805534  109844 cri.go:89] found id: ""
	I1002 20:56:32.805549  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.805555  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:32.805564  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:32.805575  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:32.871168  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:32.871190  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:32.885484  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:32.885503  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:32.942338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:32.935227    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.935814    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937470    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937975    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.939512    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:32.942348  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:32.942361  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:33.006822  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:33.006849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.539700  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:35.550793  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:35.550843  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:35.577123  109844 cri.go:89] found id: ""
	I1002 20:56:35.577141  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.577152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:35.577158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:35.577205  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:35.603414  109844 cri.go:89] found id: ""
	I1002 20:56:35.603429  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.603435  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:35.603440  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:35.603487  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:35.630119  109844 cri.go:89] found id: ""
	I1002 20:56:35.630139  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.630151  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:35.630161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:35.630216  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:35.656385  109844 cri.go:89] found id: ""
	I1002 20:56:35.656400  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.656406  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:35.656410  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:35.656461  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:35.683092  109844 cri.go:89] found id: ""
	I1002 20:56:35.683109  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.683117  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:35.683121  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:35.683168  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:35.709629  109844 cri.go:89] found id: ""
	I1002 20:56:35.709644  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.709651  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:35.709657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:35.709713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:35.737006  109844 cri.go:89] found id: ""
	I1002 20:56:35.737025  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.737035  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:35.737043  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:35.737054  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.767533  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:35.767556  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:35.833953  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:35.833980  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:35.848818  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:35.848839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:35.906998  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:35.899806    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.900358    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.901937    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.902434    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.903965    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:35.907011  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:35.907024  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:38.471319  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:38.481958  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:38.482010  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:38.507711  109844 cri.go:89] found id: ""
	I1002 20:56:38.507730  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.507751  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:38.507758  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:38.507820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:38.534015  109844 cri.go:89] found id: ""
	I1002 20:56:38.534033  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.534039  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:38.534045  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:38.534096  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:38.561341  109844 cri.go:89] found id: ""
	I1002 20:56:38.561358  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.561367  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:38.561373  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:38.561433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:38.587872  109844 cri.go:89] found id: ""
	I1002 20:56:38.587891  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.587901  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:38.587907  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:38.587973  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:38.612399  109844 cri.go:89] found id: ""
	I1002 20:56:38.612418  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.612427  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:38.612433  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:38.612480  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:38.639104  109844 cri.go:89] found id: ""
	I1002 20:56:38.639120  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.639127  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:38.639132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:38.639190  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:38.667322  109844 cri.go:89] found id: ""
	I1002 20:56:38.667339  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.667345  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:38.667352  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:38.667363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:38.682168  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:38.682187  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:38.740651  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:38.733357    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.733969    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.735590    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.736050    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.737649    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:38.740663  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:38.740674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:38.805774  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:38.805798  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:38.835944  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:38.835962  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.406460  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:41.417553  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:41.417620  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:41.444684  109844 cri.go:89] found id: ""
	I1002 20:56:41.444698  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.444705  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:41.444710  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:41.444781  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:41.471352  109844 cri.go:89] found id: ""
	I1002 20:56:41.471370  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.471382  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:41.471390  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:41.471442  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:41.498686  109844 cri.go:89] found id: ""
	I1002 20:56:41.498702  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.498709  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:41.498714  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:41.498785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:41.524449  109844 cri.go:89] found id: ""
	I1002 20:56:41.524463  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.524469  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:41.524478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:41.524531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:41.551827  109844 cri.go:89] found id: ""
	I1002 20:56:41.551845  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.551857  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:41.551864  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:41.551913  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:41.577898  109844 cri.go:89] found id: ""
	I1002 20:56:41.577918  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.577927  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:41.577933  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:41.577989  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:41.604237  109844 cri.go:89] found id: ""
	I1002 20:56:41.604254  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.604261  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:41.604270  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:41.604290  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.675907  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:41.675931  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:41.690491  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:41.690509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:41.749157  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:41.742425    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.742947    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.744615    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.745122    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.746195    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:41.749169  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:41.749184  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:41.815715  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:41.815751  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.347532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:44.358694  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:44.358755  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:44.385917  109844 cri.go:89] found id: ""
	I1002 20:56:44.385932  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.385941  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:44.385946  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:44.385992  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:44.412267  109844 cri.go:89] found id: ""
	I1002 20:56:44.412283  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.412289  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:44.412293  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:44.412344  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:44.439227  109844 cri.go:89] found id: ""
	I1002 20:56:44.439242  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.439249  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:44.439253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:44.439298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:44.465395  109844 cri.go:89] found id: ""
	I1002 20:56:44.465411  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.465418  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:44.465423  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:44.465473  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:44.491435  109844 cri.go:89] found id: ""
	I1002 20:56:44.491452  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.491457  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:44.491462  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:44.491508  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:44.517875  109844 cri.go:89] found id: ""
	I1002 20:56:44.517892  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.517899  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:44.517904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:44.517956  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:44.544412  109844 cri.go:89] found id: ""
	I1002 20:56:44.544428  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.544435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:44.544443  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:44.544454  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:44.558619  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:44.558637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:44.615090  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:44.608024    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.608566    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610178    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610634    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.612155    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:44.615103  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:44.615115  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:44.675486  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:44.675509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.704835  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:44.704853  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:47.280286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:47.291478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:47.291529  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:47.318560  109844 cri.go:89] found id: ""
	I1002 20:56:47.318581  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.318586  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:47.318594  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:47.318648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:47.344455  109844 cri.go:89] found id: ""
	I1002 20:56:47.344471  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.344477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:47.344482  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:47.344527  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:47.370437  109844 cri.go:89] found id: ""
	I1002 20:56:47.370452  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.370458  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:47.370464  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:47.370532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:47.396657  109844 cri.go:89] found id: ""
	I1002 20:56:47.396672  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.396678  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:47.396682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:47.396751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:47.422143  109844 cri.go:89] found id: ""
	I1002 20:56:47.422166  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.422172  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:47.422178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:47.422230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:47.447815  109844 cri.go:89] found id: ""
	I1002 20:56:47.447835  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.447844  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:47.447851  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:47.447910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:47.473476  109844 cri.go:89] found id: ""
	I1002 20:56:47.473491  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.473498  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:47.473514  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:47.473528  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:47.487700  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:47.487722  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:47.544344  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:47.537160    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.537816    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539394    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539878    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.541420    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:47.544360  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:47.544370  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:47.605987  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:47.606010  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:47.634796  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:47.634815  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
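	Every kubectl attempt above dies with "connection refused" on [::1]:8441, meaning nothing is listening on the apiserver port at all. A quick confirmation from inside the node (a sketch, assuming iproute2's ss and curl are available; /livez is the apiserver's standard health endpoint):

	    # Is anything bound to the apiserver port?
	    sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
	    # If a listener exists, probe its health endpoint
	    curl -ks https://localhost:8441/livez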
	I1002 20:56:50.205345  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:50.216795  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:50.216856  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:50.242490  109844 cri.go:89] found id: ""
	I1002 20:56:50.242507  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.242516  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:50.242523  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:50.242599  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:50.269384  109844 cri.go:89] found id: ""
	I1002 20:56:50.269399  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.269405  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:50.269410  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:50.269455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:50.294810  109844 cri.go:89] found id: ""
	I1002 20:56:50.294830  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.294839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:50.294847  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:50.294900  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:50.321301  109844 cri.go:89] found id: ""
	I1002 20:56:50.321330  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.321339  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:50.321345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:50.321396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:50.348435  109844 cri.go:89] found id: ""
	I1002 20:56:50.348454  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.348463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:50.348470  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:50.348521  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:50.375520  109844 cri.go:89] found id: ""
	I1002 20:56:50.375537  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.375544  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:50.375550  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:50.375612  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:50.401919  109844 cri.go:89] found id: ""
	I1002 20:56:50.401935  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.401941  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:50.401949  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:50.401960  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.474853  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:50.474878  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:50.489483  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:50.489502  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:50.546358  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:50.539620    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.540253    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.541729    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.542224    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.543673    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:50.546371  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:50.546387  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:50.612342  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:50.612365  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
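	The component scan above is one crictl query repeated per control-plane name; --quiet prints only container IDs, so an empty result is exactly what logs.go:282 reports as "0 containers". A condensed sketch of the same loop:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "no container matching $c"
	    done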
	I1002 20:56:53.143229  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:53.154347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:53.154399  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:53.179697  109844 cri.go:89] found id: ""
	I1002 20:56:53.179714  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.179722  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:53.179727  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:53.179796  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:53.206078  109844 cri.go:89] found id: ""
	I1002 20:56:53.206094  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.206102  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:53.206107  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:53.206161  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:53.232905  109844 cri.go:89] found id: ""
	I1002 20:56:53.232920  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.232929  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:53.232935  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:53.232990  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:53.258881  109844 cri.go:89] found id: ""
	I1002 20:56:53.258897  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.258903  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:53.258908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:53.259002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:53.286005  109844 cri.go:89] found id: ""
	I1002 20:56:53.286020  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.286026  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:53.286031  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:53.286077  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:53.311544  109844 cri.go:89] found id: ""
	I1002 20:56:53.311562  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.311572  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:53.311579  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:53.311642  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:53.338344  109844 cri.go:89] found id: ""
	I1002 20:56:53.338360  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.338366  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:53.338375  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:53.338391  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:53.394654  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:53.387661    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.388633    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.389809    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.390172    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.391803    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:53.394666  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:53.394676  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:53.457101  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:53.457125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.487445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:53.487464  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:53.560767  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:53.560788  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
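	Spelled out with long options, the compact dmesg invocation above is equivalent to the following (assuming util-linux dmesg): -P disables the pager, -H selects human-readable output, -L=never suppresses color, and --level restricts the severities shown.

	    sudo dmesg --nopager --human --color=never \
	      --level=warn,err,crit,alert,emerg | tail -n 400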
	I1002 20:56:56.077698  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:56.088607  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:56.088653  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:56.115831  109844 cri.go:89] found id: ""
	I1002 20:56:56.115851  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.115860  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:56.115873  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:56.115930  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:56.143933  109844 cri.go:89] found id: ""
	I1002 20:56:56.143951  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.143960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:56.143966  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:56.144013  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:56.170959  109844 cri.go:89] found id: ""
	I1002 20:56:56.170976  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.170983  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:56.170987  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:56.171041  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:56.198476  109844 cri.go:89] found id: ""
	I1002 20:56:56.198493  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.198502  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:56.198507  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:56.198553  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:56.225118  109844 cri.go:89] found id: ""
	I1002 20:56:56.225136  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.225144  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:56.225151  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:56.225203  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:56.250695  109844 cri.go:89] found id: ""
	I1002 20:56:56.250712  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.250719  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:56.250724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:56.250798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:56.277912  109844 cri.go:89] found id: ""
	I1002 20:56:56.277927  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.277933  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:56.277939  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:56.277949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:56.348703  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:56.348726  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:56.363669  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:56.363691  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:56.421487  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:56.414561    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.415193    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.416833    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.417344    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.418421    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:56.421501  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:56.421512  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:56.486234  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:56.486258  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
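	The container-status command above layers two fallbacks: the backtick substitution `which crictl || echo crictl` keeps the command word non-empty even when crictl is missing from root's PATH (the bare word then fails fast), and the outer || falls through to docker. An approximate unrolling of that logic:

	    if command -v crictl >/dev/null; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a   # runtime fallback, as in the original pipeline
	    fi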
	I1002 20:56:59.016061  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:59.027120  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:59.027174  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:59.055077  109844 cri.go:89] found id: ""
	I1002 20:56:59.055094  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.055100  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:59.055105  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:59.055154  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:59.080243  109844 cri.go:89] found id: ""
	I1002 20:56:59.080260  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.080267  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:59.080272  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:59.080321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:59.105555  109844 cri.go:89] found id: ""
	I1002 20:56:59.105573  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.105582  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:59.105588  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:59.105643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:59.131895  109844 cri.go:89] found id: ""
	I1002 20:56:59.131911  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.131918  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:59.131923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:59.131971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:59.158699  109844 cri.go:89] found id: ""
	I1002 20:56:59.158716  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.158724  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:59.158731  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:59.158813  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:59.184528  109844 cri.go:89] found id: ""
	I1002 20:56:59.184547  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.184553  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:59.184558  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:59.184621  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:59.210382  109844 cri.go:89] found id: ""
	I1002 20:56:59.210398  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.210406  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:59.210415  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:59.210435  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:59.274026  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:59.274049  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.303182  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:59.303199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:59.372421  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:59.372446  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:59.388344  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:59.388367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:59.449053  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:59.441943    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.442636    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.443715    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.444268    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.445829    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
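	The "describe nodes" attempts run the node-local kubectl binary against the admin kubeconfig minikube writes to the node; any other subcommand can be pointed at it the same way, and while the apiserver is down it fails with the identical discovery errors:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig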
	I1002 20:57:01.950787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:01.962421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:01.962505  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:01.990756  109844 cri.go:89] found id: ""
	I1002 20:57:01.990774  109844 logs.go:282] 0 containers: []
	W1002 20:57:01.990781  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:01.990786  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:01.990835  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:02.018452  109844 cri.go:89] found id: ""
	I1002 20:57:02.018471  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.018480  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:02.018485  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:02.018568  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:02.046456  109844 cri.go:89] found id: ""
	I1002 20:57:02.046474  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.046481  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:02.046485  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:02.046549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:02.074761  109844 cri.go:89] found id: ""
	I1002 20:57:02.074781  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.074794  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:02.074799  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:02.074859  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:02.102891  109844 cri.go:89] found id: ""
	I1002 20:57:02.102910  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.102919  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:02.102926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:02.102986  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:02.129478  109844 cri.go:89] found id: ""
	I1002 20:57:02.129496  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.129503  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:02.129509  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:02.129571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:02.157911  109844 cri.go:89] found id: ""
	I1002 20:57:02.157927  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.157934  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:02.157941  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:02.157954  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:02.216970  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:02.209199    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.209824    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211437    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211932    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.213815    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:02.216979  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:02.216990  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:02.280811  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:02.280839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:02.310062  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:02.310084  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:02.379511  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:02.379536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
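	Each retry cycle opens with the pgrep probe seen above: -f matches against the full command line, -x requires the regex to match that line exactly, and -n keeps only the newest matching PID. A silent exit status of 1 therefore means no apiserver process exists at all (quoting added here to avoid shell globbing; minikube passes the pattern unquoted through its ssh runner):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "apiserver not running"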
	I1002 20:57:04.894910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:04.906215  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:04.906297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:04.934307  109844 cri.go:89] found id: ""
	I1002 20:57:04.934323  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.934330  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:04.934335  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:04.934388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:04.961709  109844 cri.go:89] found id: ""
	I1002 20:57:04.961725  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.961731  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:04.961751  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:04.961803  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:04.988103  109844 cri.go:89] found id: ""
	I1002 20:57:04.988123  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.988134  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:04.988141  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:04.988204  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:05.015267  109844 cri.go:89] found id: ""
	I1002 20:57:05.015282  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.015293  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:05.015298  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:05.015347  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:05.042563  109844 cri.go:89] found id: ""
	I1002 20:57:05.042585  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.042592  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:05.042597  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:05.042648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:05.070337  109844 cri.go:89] found id: ""
	I1002 20:57:05.070356  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.070365  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:05.070372  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:05.070426  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:05.096592  109844 cri.go:89] found id: ""
	I1002 20:57:05.096607  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.096613  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:05.096622  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:05.096635  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:05.169506  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:05.169529  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:05.184432  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:05.184452  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:05.241625  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:05.234636    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.235167    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.236774    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.237205    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.238801    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:05.241643  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:05.241657  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:05.304652  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:05.304675  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
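	The timestamps show the whole probe re-running roughly every three seconds. Reduced to its essentials, the wait amounts to a loop like the following (a sketch, not minikube's actual code; the 3s interval is inferred from this log):

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      sleep 3   # matches the ~3s cadence between cycles above
	    done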
	I1002 20:57:07.835766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:07.847178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:07.847237  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:07.873351  109844 cri.go:89] found id: ""
	I1002 20:57:07.873370  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.873380  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:07.873387  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:07.873457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:07.900684  109844 cri.go:89] found id: ""
	I1002 20:57:07.900700  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.900707  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:07.900713  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:07.900792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:07.928661  109844 cri.go:89] found id: ""
	I1002 20:57:07.928677  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.928686  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:07.928692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:07.928763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:07.954556  109844 cri.go:89] found id: ""
	I1002 20:57:07.954573  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.954583  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:07.954589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:07.954657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:07.982818  109844 cri.go:89] found id: ""
	I1002 20:57:07.982833  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.982839  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:07.982845  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:07.982903  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:08.010107  109844 cri.go:89] found id: ""
	I1002 20:57:08.010123  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.010129  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:08.010134  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:08.010183  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:08.037125  109844 cri.go:89] found id: ""
	I1002 20:57:08.037142  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.037150  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:08.037157  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:08.037166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:08.096417  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:08.096440  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:08.126218  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:08.126239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:08.194545  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:08.194571  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:08.210281  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:08.210304  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:08.266772  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:08.260009   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.260455   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262035   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262436   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.264034   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:10.768500  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:10.779701  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:10.779778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:10.806553  109844 cri.go:89] found id: ""
	I1002 20:57:10.806570  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.806578  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:10.806583  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:10.806628  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:10.831907  109844 cri.go:89] found id: ""
	I1002 20:57:10.831921  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.831938  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:10.831942  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:10.831987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:10.858755  109844 cri.go:89] found id: ""
	I1002 20:57:10.858773  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.858781  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:10.858786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:10.858844  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:10.886428  109844 cri.go:89] found id: ""
	I1002 20:57:10.886451  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.886460  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:10.886467  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:10.886528  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:10.912297  109844 cri.go:89] found id: ""
	I1002 20:57:10.912336  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.912344  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:10.912351  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:10.912405  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:10.939017  109844 cri.go:89] found id: ""
	I1002 20:57:10.939037  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.939043  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:10.939050  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:10.939112  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:10.964149  109844 cri.go:89] found id: ""
	I1002 20:57:10.964166  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.964173  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:10.964181  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:10.964192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:11.035194  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:11.035220  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:11.050083  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:11.050103  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:11.107489  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:11.100162   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.100777   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102350   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102866   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.104475   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:11.107508  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:11.107525  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:11.168024  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:11.168048  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
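Each pass of this loop issues the same seven CRI lookups before falling back to log collection. Condensed into a shell loop built from the exact commands above, one pass looks like the sketch below; it is illustrative, meant to be run inside the node, and is not minikube's implementation:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$c")
      # an empty result corresponds to the 'found id: ""' lines in the log
      [ -z "$ids" ] && echo "No container was found matching \"$c\""
    done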
	I1002 20:57:13.699241  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:13.709921  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:13.709982  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:13.735975  109844 cri.go:89] found id: ""
	I1002 20:57:13.735994  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.736004  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:13.736010  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:13.736059  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:13.762999  109844 cri.go:89] found id: ""
	I1002 20:57:13.763017  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.763024  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:13.763029  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:13.763082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:13.790647  109844 cri.go:89] found id: ""
	I1002 20:57:13.790667  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.790676  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:13.790682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:13.790753  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:13.816587  109844 cri.go:89] found id: ""
	I1002 20:57:13.816607  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.816617  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:13.816623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:13.816688  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:13.842814  109844 cri.go:89] found id: ""
	I1002 20:57:13.842829  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.842836  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:13.842841  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:13.842891  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:13.868241  109844 cri.go:89] found id: ""
	I1002 20:57:13.868260  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.868269  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:13.868275  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:13.868327  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:13.895111  109844 cri.go:89] found id: ""
	I1002 20:57:13.895128  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.895138  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:13.895147  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:13.895158  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:13.962125  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:13.962150  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:13.976779  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:13.976795  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:14.033771  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:14.027040   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.027554   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029207   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029659   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.031092   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:14.033782  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:14.033792  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:14.097410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:14.097434  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
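The recurring "connection refused" on [::1]:8441 means nothing is listening on the apiserver port at all, which matches crictl finding no kube-apiserver container. Under that reading, the failure can be reproduced by hand with the two commands the test already runs; the final curl probe of /livez is an added suggestion, not part of the recorded run:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # non-zero exit: no apiserver process
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig    # fails while the port is closed
    curl -ksf https://localhost:8441/livez || echo "apiserver not serving"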
	I1002 20:57:16.629753  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:16.640873  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:16.640931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:16.668538  109844 cri.go:89] found id: ""
	I1002 20:57:16.668557  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.668568  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:16.668574  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:16.668633  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:16.697564  109844 cri.go:89] found id: ""
	I1002 20:57:16.697595  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.697605  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:16.697612  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:16.697666  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:16.725228  109844 cri.go:89] found id: ""
	I1002 20:57:16.725242  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.725248  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:16.725253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:16.725297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:16.750995  109844 cri.go:89] found id: ""
	I1002 20:57:16.751010  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.751017  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:16.751022  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:16.751066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:16.777779  109844 cri.go:89] found id: ""
	I1002 20:57:16.777796  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.777803  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:16.777809  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:16.777869  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:16.803504  109844 cri.go:89] found id: ""
	I1002 20:57:16.803521  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.803527  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:16.803532  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:16.803593  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:16.830272  109844 cri.go:89] found id: ""
	I1002 20:57:16.830287  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.830294  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:16.830302  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:16.830313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:16.902383  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:16.902407  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:16.917396  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:16.917415  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:16.974693  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:16.966376   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.966932   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.968658   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.969953   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.970548   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:16.974702  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:16.974713  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:17.035157  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:17.035179  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.566417  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:19.577676  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:19.577746  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:19.604005  109844 cri.go:89] found id: ""
	I1002 20:57:19.604021  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.604027  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:19.604032  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:19.604080  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:19.631397  109844 cri.go:89] found id: ""
	I1002 20:57:19.631415  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.631423  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:19.631433  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:19.631486  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:19.657474  109844 cri.go:89] found id: ""
	I1002 20:57:19.657491  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.657498  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:19.657502  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:19.657550  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:19.683165  109844 cri.go:89] found id: ""
	I1002 20:57:19.683183  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.683240  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:19.683248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:19.683303  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:19.709607  109844 cri.go:89] found id: ""
	I1002 20:57:19.709623  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.709629  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:19.709634  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:19.709681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:19.736310  109844 cri.go:89] found id: ""
	I1002 20:57:19.736326  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.736333  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:19.736338  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:19.736388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:19.763087  109844 cri.go:89] found id: ""
	I1002 20:57:19.763103  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.763109  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:19.763117  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:19.763130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:19.777545  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:19.777563  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:19.835265  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:19.828219   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.828825   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830398   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830870   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.832345   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:19.835276  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:19.835288  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:19.900559  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:19.900584  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.929602  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:19.929620  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
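Aside from the failing describe-nodes call, every gathering step pulls the same four log sources; only their order varies between passes. The commands below are copied verbatim from the Run lines, including the crictl-or-docker fallback:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a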
	I1002 20:57:22.502944  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:22.514059  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:22.514108  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:22.540127  109844 cri.go:89] found id: ""
	I1002 20:57:22.540144  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.540152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:22.540158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:22.540229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:22.566906  109844 cri.go:89] found id: ""
	I1002 20:57:22.566920  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.566929  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:22.566936  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:22.566988  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:22.593141  109844 cri.go:89] found id: ""
	I1002 20:57:22.593160  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.593170  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:22.593178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:22.593258  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:22.617379  109844 cri.go:89] found id: ""
	I1002 20:57:22.617395  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.617403  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:22.617408  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:22.617482  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:22.642997  109844 cri.go:89] found id: ""
	I1002 20:57:22.643015  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.643023  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:22.643030  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:22.643088  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:22.669891  109844 cri.go:89] found id: ""
	I1002 20:57:22.669910  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.669918  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:22.669925  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:22.669979  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:22.698537  109844 cri.go:89] found id: ""
	I1002 20:57:22.698553  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.698559  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:22.698571  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:22.698582  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.764795  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:22.764818  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:22.779339  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:22.779360  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:22.835541  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:22.828422   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.828970   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.830522   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.831086   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.832606   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:22.835550  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:22.835561  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:22.893791  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:22.893816  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.423487  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:25.434946  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:25.435008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:25.461262  109844 cri.go:89] found id: ""
	I1002 20:57:25.461278  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.461286  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:25.461293  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:25.461373  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:25.487938  109844 cri.go:89] found id: ""
	I1002 20:57:25.487954  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.487960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:25.487965  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:25.488008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:25.513819  109844 cri.go:89] found id: ""
	I1002 20:57:25.513833  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.513839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:25.513844  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:25.513887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:25.540047  109844 cri.go:89] found id: ""
	I1002 20:57:25.540064  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.540073  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:25.540080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:25.540218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:25.565240  109844 cri.go:89] found id: ""
	I1002 20:57:25.565256  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.565262  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:25.565267  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:25.565332  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:25.591199  109844 cri.go:89] found id: ""
	I1002 20:57:25.591214  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.591221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:25.591226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:25.591271  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:25.617021  109844 cri.go:89] found id: ""
	I1002 20:57:25.617040  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.617047  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:25.617055  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:25.617071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:25.674861  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:25.668100   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.668693   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670241   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670676   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.672203   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:25.674872  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:25.674887  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:25.735460  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:25.735487  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.765055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:25.765071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:25.833285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:25.833307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.348626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:28.359370  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:28.359432  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:28.384665  109844 cri.go:89] found id: ""
	I1002 20:57:28.384681  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.384688  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:28.384692  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:28.384756  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:28.411127  109844 cri.go:89] found id: ""
	I1002 20:57:28.411142  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.411148  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:28.411153  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:28.411198  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:28.439419  109844 cri.go:89] found id: ""
	I1002 20:57:28.439433  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.439439  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:28.439444  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:28.439491  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:28.465419  109844 cri.go:89] found id: ""
	I1002 20:57:28.465434  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.465441  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:28.465446  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:28.465494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:28.492080  109844 cri.go:89] found id: ""
	I1002 20:57:28.492098  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.492107  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:28.492114  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:28.492171  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:28.518199  109844 cri.go:89] found id: ""
	I1002 20:57:28.518215  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.518221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:28.518226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:28.518290  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:28.545226  109844 cri.go:89] found id: ""
	I1002 20:57:28.545241  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.545248  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:28.545255  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:28.545266  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:28.574035  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:28.574055  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:28.640805  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:28.640827  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.655177  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:28.655195  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:28.715784  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:28.707733   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.708329   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.710706   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.711235   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.712816   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:28.715802  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:28.715813  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.282555  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:31.293415  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:31.293460  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:31.320069  109844 cri.go:89] found id: ""
	I1002 20:57:31.320084  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.320090  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:31.320096  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:31.320141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:31.347288  109844 cri.go:89] found id: ""
	I1002 20:57:31.347308  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.347315  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:31.347319  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:31.347370  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:31.373910  109844 cri.go:89] found id: ""
	I1002 20:57:31.373926  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.373932  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:31.373936  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:31.373980  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:31.399488  109844 cri.go:89] found id: ""
	I1002 20:57:31.399504  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.399510  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:31.399515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:31.399579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:31.425794  109844 cri.go:89] found id: ""
	I1002 20:57:31.425809  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.425815  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:31.425824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:31.425878  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:31.452232  109844 cri.go:89] found id: ""
	I1002 20:57:31.452247  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.452253  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:31.452258  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:31.452304  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:31.478189  109844 cri.go:89] found id: ""
	I1002 20:57:31.478208  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.478217  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:31.478226  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:31.478239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:31.535213  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:31.527960   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.528553   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530059   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530507   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.532158   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:31.535223  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:31.535235  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.596390  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:31.596416  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:31.625326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:31.625347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:31.695449  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:31.695470  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.210847  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:34.221612  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:34.221660  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:34.248100  109844 cri.go:89] found id: ""
	I1002 20:57:34.248118  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.248124  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:34.248129  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:34.248177  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:34.273928  109844 cri.go:89] found id: ""
	I1002 20:57:34.273947  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.273953  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:34.273958  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:34.274004  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:34.300659  109844 cri.go:89] found id: ""
	I1002 20:57:34.300677  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.300684  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:34.300688  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:34.300751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:34.328932  109844 cri.go:89] found id: ""
	I1002 20:57:34.328950  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.328958  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:34.328964  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:34.329012  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:34.355289  109844 cri.go:89] found id: ""
	I1002 20:57:34.355305  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.355315  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:34.355320  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:34.355371  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:34.381635  109844 cri.go:89] found id: ""
	I1002 20:57:34.381651  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.381658  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:34.381664  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:34.381713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:34.406539  109844 cri.go:89] found id: ""
	I1002 20:57:34.406558  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.406567  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:34.406575  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:34.406586  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:34.476613  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:34.476637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.491529  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:34.491545  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:34.548604  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:34.541411   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.541857   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543425   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543873   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.545469   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:34.548616  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:34.548627  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:34.614034  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:34.614057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
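
The cycle above is the core of the probe: for every control-plane component, minikube asks the CRI runtime for any container, running or exited, whose name matches, and an empty ID list produces the "No container was found matching" warning. A minimal sketch of that loop, assuming crictl is installed and pointed at the node's CRI socket:

  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet; do
    # -a includes exited containers; --quiet prints only IDs
    ids=$(sudo crictl ps -a --quiet --name="$name")
    if [ -z "$ids" ]; then
      echo "no container found matching \"$name\""
    else
      echo "$name: $ids"
    fi
  done

Here every component comes back empty, which is why each pass falls back to gathering journals, dmesg, and container status.
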
	I1002 20:57:37.146000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:37.156680  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:37.156731  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:37.183104  109844 cri.go:89] found id: ""
	I1002 20:57:37.183120  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.183126  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:37.183130  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:37.183180  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:37.209542  109844 cri.go:89] found id: ""
	I1002 20:57:37.209561  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.209570  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:37.209593  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:37.209651  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:37.236887  109844 cri.go:89] found id: ""
	I1002 20:57:37.236902  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.236907  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:37.236912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:37.236955  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:37.263572  109844 cri.go:89] found id: ""
	I1002 20:57:37.263590  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.263600  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:37.263606  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:37.263670  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:37.290064  109844 cri.go:89] found id: ""
	I1002 20:57:37.290081  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.290088  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:37.290092  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:37.290140  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:37.315854  109844 cri.go:89] found id: ""
	I1002 20:57:37.315870  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.315877  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:37.315881  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:37.315928  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:37.341863  109844 cri.go:89] found id: ""
	I1002 20:57:37.341881  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.341888  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:37.341896  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:37.341906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.370994  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:37.371009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:37.436106  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:37.436137  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:37.451121  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:37.451149  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:37.506868  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:37.499823   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.500382   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.501949   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.502458   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.504014   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:37.506882  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:37.506894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
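
Every kubectl attempt in these cycles dies with "connection refused" on localhost:8441, the apiserver port for this run, so nothing is listening there at all. A quick hand check for the same condition, assuming the standard kube-apiserver /livez health endpoint (-k skips certificate verification):

  if curl -fsk --max-time 5 https://localhost:8441/livez >/dev/null; then
    echo "apiserver is up"
  else
    echo "apiserver unreachable on :8441 (matches the errors above)"
  fi
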
	I1002 20:57:40.067997  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:40.078961  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:40.079015  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:40.104825  109844 cri.go:89] found id: ""
	I1002 20:57:40.104841  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.104848  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:40.104853  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:40.104901  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:40.131395  109844 cri.go:89] found id: ""
	I1002 20:57:40.131410  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.131417  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:40.131421  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:40.131472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:40.156879  109844 cri.go:89] found id: ""
	I1002 20:57:40.156894  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.156900  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:40.156904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:40.156950  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:40.184037  109844 cri.go:89] found id: ""
	I1002 20:57:40.184052  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.184058  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:40.184063  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:40.184109  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:40.209631  109844 cri.go:89] found id: ""
	I1002 20:57:40.209645  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.209652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:40.209657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:40.209718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:40.235959  109844 cri.go:89] found id: ""
	I1002 20:57:40.235974  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.235981  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:40.235985  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:40.236031  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:40.263268  109844 cri.go:89] found id: ""
	I1002 20:57:40.263295  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.263303  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:40.263312  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:40.263329  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:40.277655  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:40.277674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:40.333759  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:40.326797   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.327375   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.328853   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.329279   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.330917   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:40.333771  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:40.333782  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.398547  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:40.398573  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:40.429055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:40.429075  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
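
The gather step pulls the same four sources each time: the last 400 journal lines for kubelet and CRI-O, filtered kernel messages, and the container table. A sketch that bundles them into one dump, using the exact commands from the log (the output path is illustrative):

  {
    echo "== kubelet ==";    sudo journalctl -u kubelet -n 400
    echo "== crio ==";       sudo journalctl -u crio -n 400
    echo "== dmesg ==";      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    echo "== containers =="; sudo crictl ps -a || sudo docker ps -a
  } > /tmp/minikube-diag.txt
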
	I1002 20:57:43.000960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:43.011533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:43.011594  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:43.038639  109844 cri.go:89] found id: ""
	I1002 20:57:43.038658  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.038664  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:43.038670  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:43.038718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:43.064610  109844 cri.go:89] found id: ""
	I1002 20:57:43.064629  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.064638  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:43.064645  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:43.064692  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:43.092797  109844 cri.go:89] found id: ""
	I1002 20:57:43.092814  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.092829  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:43.092836  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:43.092905  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:43.117372  109844 cri.go:89] found id: ""
	I1002 20:57:43.117390  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.117398  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:43.117405  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:43.117455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:43.143883  109844 cri.go:89] found id: ""
	I1002 20:57:43.143898  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.143903  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:43.143908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:43.143954  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:43.168684  109844 cri.go:89] found id: ""
	I1002 20:57:43.168703  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.168711  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:43.168719  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:43.168794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:43.194200  109844 cri.go:89] found id: ""
	I1002 20:57:43.194219  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.194226  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:43.194233  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:43.194243  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:43.224696  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:43.224716  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.292485  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:43.292511  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:43.307408  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:43.307426  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:43.365123  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:43.365138  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:43.365151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:45.930176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:45.940786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:45.940834  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:45.966149  109844 cri.go:89] found id: ""
	I1002 20:57:45.966163  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.966170  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:45.966174  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:45.966229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:45.991076  109844 cri.go:89] found id: ""
	I1002 20:57:45.991091  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.991098  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:45.991103  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:45.991160  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:46.016684  109844 cri.go:89] found id: ""
	I1002 20:57:46.016699  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.016707  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:46.016712  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:46.016783  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:46.044048  109844 cri.go:89] found id: ""
	I1002 20:57:46.044066  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.044075  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:46.044080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:46.044126  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:46.072438  109844 cri.go:89] found id: ""
	I1002 20:57:46.072458  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.072463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:46.072468  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:46.072513  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:46.098408  109844 cri.go:89] found id: ""
	I1002 20:57:46.098427  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.098435  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:46.098440  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:46.098494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:46.125237  109844 cri.go:89] found id: ""
	I1002 20:57:46.125253  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.125260  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:46.125267  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:46.125279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:46.181454  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:46.181465  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:46.181477  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:46.245377  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:46.245400  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:46.273829  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:46.273850  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:46.343515  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:46.343537  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
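
The "describe nodes" step runs the kubectl binary that minikube staged for the cluster version against the node-local kubeconfig, which is why the version and paths are pinned. Reproducing it by hand from inside the node looks like this (paths exactly as they appear in the log):

  sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig describe nodes

It fails here for the same reason as every other attempt: that kubeconfig points at localhost:8441 and no apiserver is listening.
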
	I1002 20:57:48.859249  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:48.870377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:48.870433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:48.897669  109844 cri.go:89] found id: ""
	I1002 20:57:48.897687  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.897694  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:48.897699  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:48.897762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:48.925008  109844 cri.go:89] found id: ""
	I1002 20:57:48.925023  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.925030  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:48.925036  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:48.925083  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:48.951643  109844 cri.go:89] found id: ""
	I1002 20:57:48.951657  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.951664  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:48.951668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:48.951714  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:48.979002  109844 cri.go:89] found id: ""
	I1002 20:57:48.979020  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.979029  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:48.979036  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:48.979093  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:49.004625  109844 cri.go:89] found id: ""
	I1002 20:57:49.004641  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.004648  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:49.004652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:49.004701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:49.031772  109844 cri.go:89] found id: ""
	I1002 20:57:49.031788  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.031793  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:49.031805  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:49.031862  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:49.057980  109844 cri.go:89] found id: ""
	I1002 20:57:49.057996  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.058004  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:49.058013  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:49.058023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:49.124248  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:49.124270  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:49.138512  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:49.138533  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:49.195138  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:49.187056   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.188681   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.189138   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.190686   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.191107   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:49.195151  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:49.195173  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:49.258973  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:49.258997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
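
The "container status" command is a double fallback packed into one line: resolve crictl's full path if it is on PATH (otherwise try the bare name), and if the CRI listing fails for any reason, fall back to docker. Unpacked, the same logic reads:

  # prefer the resolved crictl path, else the bare name
  CRICTL=$(which crictl || echo crictl)
  # CRI listing first; plain Docker as a last resort
  sudo "$CRICTL" ps -a || sudo docker ps -a

This keeps the gather step working on nodes running either a CRI runtime or plain Docker.
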
	I1002 20:57:51.791466  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:51.802977  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:51.803035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:51.828498  109844 cri.go:89] found id: ""
	I1002 20:57:51.828514  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.828521  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:51.828526  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:51.828588  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:51.854342  109844 cri.go:89] found id: ""
	I1002 20:57:51.854360  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.854371  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:51.854378  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:51.854456  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:51.880507  109844 cri.go:89] found id: ""
	I1002 20:57:51.880524  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.880532  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:51.880537  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:51.880595  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:51.905868  109844 cri.go:89] found id: ""
	I1002 20:57:51.905885  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.905899  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:51.905906  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:51.905958  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:51.931501  109844 cri.go:89] found id: ""
	I1002 20:57:51.931520  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.931527  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:51.931533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:51.931584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:51.959507  109844 cri.go:89] found id: ""
	I1002 20:57:51.959531  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.959537  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:51.959543  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:51.959597  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:51.986060  109844 cri.go:89] found id: ""
	I1002 20:57:51.986075  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.986082  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:51.986090  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:51.986102  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:52.001242  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:52.001265  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:52.058943  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:52.058955  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:52.058966  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:52.124165  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:52.124189  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:52.153884  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:52.153905  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
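
The timestamps show the whole probe repeating on a roughly three-second cadence (20:57:34, :37, :40, :43, ...), a plain poll-until-deadline loop with the apiserver process check as the exit condition. A sketch of that shape, with an illustrative interval and timeout:

  deadline=$((SECONDS + 360))
  # -x exact match, -n newest, -f match the full command line
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if (( SECONDS >= deadline )); then
      echo "timed out waiting for kube-apiserver" >&2
      exit 1
    fi
    sleep 3
  done
  echo "kube-apiserver process found"
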
	I1002 20:57:54.722906  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:54.734175  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:54.734232  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:54.759813  109844 cri.go:89] found id: ""
	I1002 20:57:54.759827  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.759834  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:54.759839  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:54.759886  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:54.786211  109844 cri.go:89] found id: ""
	I1002 20:57:54.786228  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.786234  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:54.786238  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:54.786296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:54.812209  109844 cri.go:89] found id: ""
	I1002 20:57:54.812224  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.812231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:54.812235  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:54.812279  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:54.838338  109844 cri.go:89] found id: ""
	I1002 20:57:54.838354  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.838359  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:54.838364  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:54.838409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:54.864235  109844 cri.go:89] found id: ""
	I1002 20:57:54.864250  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.864257  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:54.864262  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:54.864313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:54.889322  109844 cri.go:89] found id: ""
	I1002 20:57:54.889338  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.889345  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:54.889350  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:54.889408  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:54.914375  109844 cri.go:89] found id: ""
	I1002 20:57:54.914389  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.914396  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:54.914403  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:54.914413  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.982673  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:54.982695  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:54.997624  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:54.997643  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:55.054906  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:55.054918  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:55.054930  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:55.114767  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:55.114791  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:57.644999  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:57.656449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:57.656504  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:57.681519  109844 cri.go:89] found id: ""
	I1002 20:57:57.681536  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.681547  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:57.681562  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:57.681613  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:57.707282  109844 cri.go:89] found id: ""
	I1002 20:57:57.707299  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.707306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:57.707311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:57.707368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:57.733730  109844 cri.go:89] found id: ""
	I1002 20:57:57.733764  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.733773  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:57.733779  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:57.733829  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:57.759892  109844 cri.go:89] found id: ""
	I1002 20:57:57.759910  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.759919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:57.759930  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:57.759977  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:57.786461  109844 cri.go:89] found id: ""
	I1002 20:57:57.786480  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.786488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:57.786494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:57.786554  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:57.811498  109844 cri.go:89] found id: ""
	I1002 20:57:57.811513  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.811520  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:57.811525  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:57.811584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:57.838643  109844 cri.go:89] found id: ""
	I1002 20:57:57.838658  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.838664  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:57.838672  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:57.838683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:57.903092  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:57.903112  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:57.917294  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:57.917313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:57.973186  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:57.973196  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:57.973206  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:58.037591  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:58.037615  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:00.568697  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:00.579453  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:00.579509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:00.605205  109844 cri.go:89] found id: ""
	I1002 20:58:00.605221  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.605228  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:00.605236  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:00.605281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:00.630667  109844 cri.go:89] found id: ""
	I1002 20:58:00.630683  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.630690  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:00.630695  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:00.630779  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:00.656328  109844 cri.go:89] found id: ""
	I1002 20:58:00.656343  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.656349  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:00.656356  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:00.656404  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:00.687352  109844 cri.go:89] found id: ""
	I1002 20:58:00.687372  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.687380  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:00.687387  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:00.687450  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:00.715971  109844 cri.go:89] found id: ""
	I1002 20:58:00.715989  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.715996  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:00.716001  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:00.716051  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:00.743250  109844 cri.go:89] found id: ""
	I1002 20:58:00.743267  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.743274  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:00.743279  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:00.743337  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:00.768377  109844 cri.go:89] found id: ""
	I1002 20:58:00.768394  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.768402  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:00.768410  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:00.768421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:00.836309  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:00.836330  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:00.851074  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:00.851091  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:00.909067  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:00.909078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:00.909089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:00.967974  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:00.967996  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
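From here the same sweep repeats verbatim except for timestamps and PIDs: minikube lists all CRI containers for each expected control-plane component and, finding none, logs the matching warning. The sweep is equivalent to this shell sketch (illustrative only; minikube issues each command separately over its ssh_runner rather than as one script):

	#!/usr/bin/env bash
	# Enumerate the expected control-plane containers exactly as the log does.
	for name in kube-apiserver etcd coredns kube-scheduler \
	            kube-proxy kube-controller-manager kindnet; do
	    ids=$(sudo crictl ps -a --quiet --name="${name}")
	    if [ -z "${ids}" ]; then
	        echo "No container was found matching \"${name}\"" >&2
	    else
	        echo "${name}: ${ids}"
	    fi
	done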
	I1002 20:58:03.498950  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:03.509660  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:03.509721  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:03.535662  109844 cri.go:89] found id: ""
	I1002 20:58:03.535677  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.535684  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:03.535689  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:03.535733  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:03.561250  109844 cri.go:89] found id: ""
	I1002 20:58:03.561265  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.561272  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:03.561277  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:03.561321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:03.587048  109844 cri.go:89] found id: ""
	I1002 20:58:03.587067  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.587076  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:03.587083  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:03.587147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:03.613674  109844 cri.go:89] found id: ""
	I1002 20:58:03.613690  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.613697  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:03.613702  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:03.613769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:03.640328  109844 cri.go:89] found id: ""
	I1002 20:58:03.640347  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.640355  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:03.640361  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:03.640422  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:03.666291  109844 cri.go:89] found id: ""
	I1002 20:58:03.666312  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.666319  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:03.666331  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:03.666382  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:03.691967  109844 cri.go:89] found id: ""
	I1002 20:58:03.691985  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.691992  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:03.692006  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:03.692016  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:03.759409  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:03.759439  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:03.774258  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:03.774279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:03.832338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:03.832353  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:03.832368  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:03.893996  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:03.894020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:06.425787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:06.436589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:06.436637  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:06.462848  109844 cri.go:89] found id: ""
	I1002 20:58:06.462863  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.462870  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:06.462876  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:06.462923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:06.488755  109844 cri.go:89] found id: ""
	I1002 20:58:06.488775  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.488784  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:06.488790  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:06.488840  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:06.514901  109844 cri.go:89] found id: ""
	I1002 20:58:06.514916  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.514922  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:06.514927  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:06.514970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:06.541198  109844 cri.go:89] found id: ""
	I1002 20:58:06.541216  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.541222  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:06.541227  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:06.541274  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:06.566811  109844 cri.go:89] found id: ""
	I1002 20:58:06.566829  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.566835  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:06.566839  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:06.566889  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:06.592998  109844 cri.go:89] found id: ""
	I1002 20:58:06.593016  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.593025  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:06.593032  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:06.593082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:06.619126  109844 cri.go:89] found id: ""
	I1002 20:58:06.619142  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.619149  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:06.619156  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:06.619169  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:06.688927  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:06.688949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:06.703470  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:06.703489  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:06.759531  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:06.759547  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:06.759558  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:06.821429  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:06.821453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:09.350584  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:09.361407  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:09.361457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:09.387670  109844 cri.go:89] found id: ""
	I1002 20:58:09.387686  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.387692  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:09.387697  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:09.387769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:09.414282  109844 cri.go:89] found id: ""
	I1002 20:58:09.414297  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.414303  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:09.414308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:09.414359  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:09.439986  109844 cri.go:89] found id: ""
	I1002 20:58:09.440004  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.440013  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:09.440021  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:09.440078  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:09.465260  109844 cri.go:89] found id: ""
	I1002 20:58:09.465274  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.465279  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:09.465284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:09.465342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:09.490459  109844 cri.go:89] found id: ""
	I1002 20:58:09.490475  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.490485  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:09.490492  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:09.490542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:09.517572  109844 cri.go:89] found id: ""
	I1002 20:58:09.517589  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.517597  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:09.517604  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:09.517657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:09.543171  109844 cri.go:89] found id: ""
	I1002 20:58:09.543190  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.543200  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:09.543210  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:09.543224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:09.610811  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:09.610836  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:09.625732  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:09.625765  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:09.684133  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:09.684159  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:09.684172  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:09.750121  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:09.750146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:12.281914  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:12.292614  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:12.292681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:12.319213  109844 cri.go:89] found id: ""
	I1002 20:58:12.319229  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.319236  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:12.319241  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:12.319307  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:12.346475  109844 cri.go:89] found id: ""
	I1002 20:58:12.346491  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.346497  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:12.346506  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:12.346558  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:12.373396  109844 cri.go:89] found id: ""
	I1002 20:58:12.373412  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.373418  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:12.373422  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:12.373472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:12.399960  109844 cri.go:89] found id: ""
	I1002 20:58:12.399975  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.399984  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:12.399990  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:12.400046  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:12.426115  109844 cri.go:89] found id: ""
	I1002 20:58:12.426134  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.426143  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:12.426148  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:12.426199  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:12.453989  109844 cri.go:89] found id: ""
	I1002 20:58:12.454005  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.454012  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:12.454017  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:12.454082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:12.480468  109844 cri.go:89] found id: ""
	I1002 20:58:12.480482  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.480489  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:12.480497  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:12.480506  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:12.546963  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:12.546987  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:12.561865  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:12.561884  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:12.618630  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:12.611604   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.612174   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.613811   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.614220   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.615797   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:12.618644  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:12.618659  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:12.679779  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:12.679800  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:15.211438  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:15.222920  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:15.222984  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:15.249459  109844 cri.go:89] found id: ""
	I1002 20:58:15.249477  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.249486  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:15.249493  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:15.249563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:15.275298  109844 cri.go:89] found id: ""
	I1002 20:58:15.275317  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.275324  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:15.275329  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:15.275376  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:15.301700  109844 cri.go:89] found id: ""
	I1002 20:58:15.301716  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.301722  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:15.301730  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:15.301798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:15.329414  109844 cri.go:89] found id: ""
	I1002 20:58:15.329435  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.329442  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:15.329449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:15.329509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:15.355068  109844 cri.go:89] found id: ""
	I1002 20:58:15.355085  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.355093  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:15.355098  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:15.355148  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:15.380359  109844 cri.go:89] found id: ""
	I1002 20:58:15.380376  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.380383  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:15.380388  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:15.380447  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:15.407083  109844 cri.go:89] found id: ""
	I1002 20:58:15.407100  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.407107  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:15.407114  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:15.407125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:15.475929  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:15.475952  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:15.490571  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:15.490597  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:15.548455  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:15.541509   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.542074   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.543830   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.544263   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.545369   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:15.548470  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:15.548492  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:15.612985  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:15.613011  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:18.144173  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:18.154768  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:18.154839  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:18.181108  109844 cri.go:89] found id: ""
	I1002 20:58:18.181127  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.181135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:18.181142  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:18.181211  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:18.207541  109844 cri.go:89] found id: ""
	I1002 20:58:18.207557  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.207564  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:18.207568  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:18.207617  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:18.234607  109844 cri.go:89] found id: ""
	I1002 20:58:18.234623  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.234630  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:18.234635  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:18.234682  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:18.262449  109844 cri.go:89] found id: ""
	I1002 20:58:18.262465  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.262471  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:18.262476  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:18.262525  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:18.288587  109844 cri.go:89] found id: ""
	I1002 20:58:18.288604  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.288611  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:18.288615  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:18.288671  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:18.315591  109844 cri.go:89] found id: ""
	I1002 20:58:18.315608  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.315616  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:18.315623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:18.315686  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:18.341916  109844 cri.go:89] found id: ""
	I1002 20:58:18.341934  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.341943  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:18.341953  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:18.341967  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:18.409370  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:18.409397  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:18.423940  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:18.423957  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:18.481317  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:18.474299   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.474857   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476482   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476953   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.478581   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:18.481328  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:18.481341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:18.544851  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:18.544915  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:21.076714  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:21.087984  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:21.088035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:21.114553  109844 cri.go:89] found id: ""
	I1002 20:58:21.114567  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.114574  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:21.114579  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:21.114627  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:21.140623  109844 cri.go:89] found id: ""
	I1002 20:58:21.140640  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.140647  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:21.140652  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:21.140709  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:21.167287  109844 cri.go:89] found id: ""
	I1002 20:58:21.167303  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.167310  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:21.167314  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:21.167366  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:21.192955  109844 cri.go:89] found id: ""
	I1002 20:58:21.192970  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.192976  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:21.192981  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:21.193026  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:21.218443  109844 cri.go:89] found id: ""
	I1002 20:58:21.218461  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.218470  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:21.218477  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:21.218543  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:21.245610  109844 cri.go:89] found id: ""
	I1002 20:58:21.245629  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.245636  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:21.245641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:21.245705  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:21.274044  109844 cri.go:89] found id: ""
	I1002 20:58:21.274062  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.274071  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:21.274082  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:21.274094  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:21.344823  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:21.344846  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:21.359586  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:21.359607  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:21.415715  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:21.408650   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.409207   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.410856   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.411238   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.412941   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:21.415727  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:21.415761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:21.481719  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:21.481748  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.012099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:24.023176  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:24.023230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:24.048833  109844 cri.go:89] found id: ""
	I1002 20:58:24.048848  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.048854  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:24.048859  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:24.048910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:24.075718  109844 cri.go:89] found id: ""
	I1002 20:58:24.075734  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.075760  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:24.075767  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:24.075820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:24.102393  109844 cri.go:89] found id: ""
	I1002 20:58:24.102408  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.102415  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:24.102420  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:24.102470  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:24.128211  109844 cri.go:89] found id: ""
	I1002 20:58:24.128226  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.128233  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:24.128237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:24.128295  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:24.154298  109844 cri.go:89] found id: ""
	I1002 20:58:24.154317  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.154337  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:24.154342  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:24.154400  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:24.180259  109844 cri.go:89] found id: ""
	I1002 20:58:24.180279  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.180289  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:24.180294  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:24.180343  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:24.206017  109844 cri.go:89] found id: ""
	I1002 20:58:24.206032  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.206038  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:24.206045  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:24.206057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:24.262477  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:24.255581   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.256099   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.257667   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.258105   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.259636   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:24.262487  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:24.262499  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:24.326558  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:24.326583  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.357911  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:24.357927  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:24.425144  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:24.425170  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:26.942340  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:26.953162  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:26.953210  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:26.977629  109844 cri.go:89] found id: ""
	I1002 20:58:26.977645  109844 logs.go:282] 0 containers: []
	W1002 20:58:26.977652  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:26.977656  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:26.977701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:27.003794  109844 cri.go:89] found id: ""
	I1002 20:58:27.003810  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.003817  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:27.003821  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:27.003871  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:27.031644  109844 cri.go:89] found id: ""
	I1002 20:58:27.031662  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.031669  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:27.031673  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:27.031723  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:27.058490  109844 cri.go:89] found id: ""
	I1002 20:58:27.058522  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.058529  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:27.058533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:27.058580  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:27.083451  109844 cri.go:89] found id: ""
	I1002 20:58:27.083468  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.083475  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:27.083480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:27.083536  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:27.108449  109844 cri.go:89] found id: ""
	I1002 20:58:27.108467  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.108475  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:27.108481  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:27.108542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:27.135415  109844 cri.go:89] found id: ""
	I1002 20:58:27.135433  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.135441  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:27.135451  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:27.135467  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:27.206016  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:27.206039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:27.220873  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:27.220894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:27.276309  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:27.269235   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.269791   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271364   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271799   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.273317   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:27.276320  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:27.276335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:27.341398  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:27.341421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:29.872391  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:29.883459  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:29.883531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:29.909713  109844 cri.go:89] found id: ""
	I1002 20:58:29.909729  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.909748  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:29.909755  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:29.909806  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:29.934338  109844 cri.go:89] found id: ""
	I1002 20:58:29.934354  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.934360  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:29.934365  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:29.934409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:29.961900  109844 cri.go:89] found id: ""
	I1002 20:58:29.961917  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.961926  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:29.961932  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:29.961998  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:29.988238  109844 cri.go:89] found id: ""
	I1002 20:58:29.988253  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.988260  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:29.988265  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:29.988328  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:30.013598  109844 cri.go:89] found id: ""
	I1002 20:58:30.013613  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.013619  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:30.013624  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:30.013674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:30.040799  109844 cri.go:89] found id: ""
	I1002 20:58:30.040817  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.040824  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:30.040829  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:30.040875  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:30.067159  109844 cri.go:89] found id: ""
	I1002 20:58:30.067174  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.067180  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:30.067187  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:30.067199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:30.081264  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:30.081282  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:30.136411  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:30.129335   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.129861   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131445   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131865   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.133370   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:30.136422  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:30.136436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:30.198567  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:30.198599  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:30.226466  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:30.226488  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:32.794266  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:32.805593  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:32.805643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:32.832000  109844 cri.go:89] found id: ""
	I1002 20:58:32.832015  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.832022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:32.832027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:32.832072  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:32.858662  109844 cri.go:89] found id: ""
	I1002 20:58:32.858680  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.858687  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:32.858691  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:32.858758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:32.884652  109844 cri.go:89] found id: ""
	I1002 20:58:32.884671  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.884679  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:32.884686  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:32.884767  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:32.911548  109844 cri.go:89] found id: ""
	I1002 20:58:32.911571  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.911578  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:32.911583  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:32.911631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:32.939319  109844 cri.go:89] found id: ""
	I1002 20:58:32.939335  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.939343  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:32.939347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:32.939396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:32.965654  109844 cri.go:89] found id: ""
	I1002 20:58:32.965670  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.965677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:32.965681  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:32.965750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:32.991821  109844 cri.go:89] found id: ""
	I1002 20:58:32.991837  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.991843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:32.991851  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:32.991861  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:33.059096  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:33.059118  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:33.074520  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:33.074536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:33.130853  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:33.124022   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.124509   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126111   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126586   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.128121   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:33.130867  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:33.130881  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:33.196122  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:33.196146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:35.728638  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:35.739628  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:35.739676  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:35.764726  109844 cri.go:89] found id: ""
	I1002 20:58:35.764760  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.764771  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:35.764777  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:35.764823  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:35.791011  109844 cri.go:89] found id: ""
	I1002 20:58:35.791026  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.791032  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:35.791037  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:35.791082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:35.817209  109844 cri.go:89] found id: ""
	I1002 20:58:35.817225  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.817231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:35.817236  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:35.817281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:35.842125  109844 cri.go:89] found id: ""
	I1002 20:58:35.842139  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.842145  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:35.842154  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:35.842200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:35.867608  109844 cri.go:89] found id: ""
	I1002 20:58:35.867625  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.867631  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:35.867636  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:35.867681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:35.893798  109844 cri.go:89] found id: ""
	I1002 20:58:35.893813  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.893819  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:35.893824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:35.893881  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:35.920822  109844 cri.go:89] found id: ""
	I1002 20:58:35.920837  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.920843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:35.920851  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:35.920862  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:35.982786  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:35.982809  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:36.012445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:36.012461  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:36.079729  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:36.079764  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:36.094119  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:36.094139  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:36.149838  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:36.142929   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.143480   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145076   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145533   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.147087   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:38.650569  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:38.661345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:38.661406  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:38.687690  109844 cri.go:89] found id: ""
	I1002 20:58:38.687709  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.687719  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:38.687729  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:38.687800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:38.712812  109844 cri.go:89] found id: ""
	I1002 20:58:38.712830  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.712840  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:38.712846  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:38.712897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:38.738922  109844 cri.go:89] found id: ""
	I1002 20:58:38.738938  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.738945  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:38.738951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:38.739014  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:38.766166  109844 cri.go:89] found id: ""
	I1002 20:58:38.766184  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.766191  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:38.766201  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:38.766259  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:38.793662  109844 cri.go:89] found id: ""
	I1002 20:58:38.793679  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.793687  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:38.793692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:38.793758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:38.820204  109844 cri.go:89] found id: ""
	I1002 20:58:38.820225  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.820233  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:38.820242  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:38.820301  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:38.846100  109844 cri.go:89] found id: ""
	I1002 20:58:38.846116  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.846122  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:38.846130  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:38.846143  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:38.912234  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:38.912257  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:38.926642  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:38.926661  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:38.983128  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:38.975680   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.976323   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.977925   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.978355   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.979926   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:38.983140  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:38.983151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:39.042170  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:39.042192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.573431  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:41.584132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:41.584179  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:41.610465  109844 cri.go:89] found id: ""
	I1002 20:58:41.610490  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.610500  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:41.610507  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:41.610571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:41.636463  109844 cri.go:89] found id: ""
	I1002 20:58:41.636481  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.636488  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:41.636493  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:41.636544  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:41.663306  109844 cri.go:89] found id: ""
	I1002 20:58:41.663324  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.663334  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:41.663340  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:41.663389  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:41.689945  109844 cri.go:89] found id: ""
	I1002 20:58:41.689963  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.689970  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:41.689975  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:41.690030  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:41.716483  109844 cri.go:89] found id: ""
	I1002 20:58:41.716498  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.716511  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:41.716515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:41.716563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:41.741653  109844 cri.go:89] found id: ""
	I1002 20:58:41.741670  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.741677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:41.741682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:41.741728  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:41.768401  109844 cri.go:89] found id: ""
	I1002 20:58:41.768418  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.768425  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:41.768433  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:41.768444  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:41.825098  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:41.825108  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:41.825120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:41.885569  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:41.885592  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.914823  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:41.914840  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:41.982285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:41.982309  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.498020  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:44.508926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:44.508975  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:44.534766  109844 cri.go:89] found id: ""
	I1002 20:58:44.534783  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.534791  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:44.534797  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:44.534849  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:44.561400  109844 cri.go:89] found id: ""
	I1002 20:58:44.561418  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.561425  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:44.561429  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:44.561481  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:44.587621  109844 cri.go:89] found id: ""
	I1002 20:58:44.587638  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.587644  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:44.587649  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:44.587696  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:44.612688  109844 cri.go:89] found id: ""
	I1002 20:58:44.612703  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.612709  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:44.612717  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:44.612784  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:44.639713  109844 cri.go:89] found id: ""
	I1002 20:58:44.639728  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.639755  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:44.639763  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:44.639821  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:44.666252  109844 cri.go:89] found id: ""
	I1002 20:58:44.666271  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.666278  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:44.666283  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:44.666330  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:44.692295  109844 cri.go:89] found id: ""
	I1002 20:58:44.692311  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.692318  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:44.692326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:44.692336  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:44.763438  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:44.763462  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.777919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:44.777938  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:44.833114  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:44.833126  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:44.833138  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:44.893410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:44.893436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.425929  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:47.437727  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:47.437800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:47.465106  109844 cri.go:89] found id: ""
	I1002 20:58:47.465125  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.465135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:47.465141  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:47.465202  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:47.492450  109844 cri.go:89] found id: ""
	I1002 20:58:47.492469  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.492477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:47.492487  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:47.492548  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:47.518249  109844 cri.go:89] found id: ""
	I1002 20:58:47.518266  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.518273  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:47.518280  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:47.518329  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:47.546009  109844 cri.go:89] found id: ""
	I1002 20:58:47.546026  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.546035  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:47.546040  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:47.546095  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:47.571969  109844 cri.go:89] found id: ""
	I1002 20:58:47.571984  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.571991  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:47.571995  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:47.572044  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:47.598332  109844 cri.go:89] found id: ""
	I1002 20:58:47.598352  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.598362  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:47.598371  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:47.598433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:47.624909  109844 cri.go:89] found id: ""
	I1002 20:58:47.624923  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.624932  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:47.624942  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:47.624955  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:47.682066  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(same five connection-refused errors and closing message as quoted directly above)
	
	** /stderr **
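	Every kubectl call above fails with connection refused on localhost:8441 because the apiserver never came up, so nothing is listening on the port. One way to confirm that from the host is to check for a listener directly; a minimal sketch, assuming the profile name functional-012915 that appears in the CRI-O log at the end of this section:
	
	    # profile name taken from the CRI-O log below; empty output means no listener on the apiserver port
	    minikube ssh -p functional-012915 -- sudo ss -ltnp | grep 8441
	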
	I1002 20:58:47.682078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:47.682089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:47.742340  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:47.742363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.772411  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:47.772428  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:47.841816  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:47.841839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:50.357907  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:50.368776  109844 kubeadm.go:601] duration metric: took 4m2.902167912s to restartPrimaryControlPlane
	W1002 20:58:50.368863  109844 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:58:50.368929  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:58:50.818759  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:50.831475  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:58:50.839597  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:58:50.839643  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:58:50.847290  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:58:50.847300  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 20:58:50.847341  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:58:50.854889  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:58:50.854928  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:58:50.862239  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:58:50.869705  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:58:50.869763  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:58:50.877993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.885836  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:58:50.885887  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.893993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:58:50.902316  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:58:50.902371  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:58:50.910549  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:58:50.946945  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:58:50.946991  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:58:50.966485  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:58:50.966578  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:58:50.966620  109844 kubeadm.go:318] OS: Linux
	I1002 20:58:50.966677  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:58:50.966753  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:58:50.966809  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:58:50.966867  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:58:50.966933  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:58:50.966988  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:58:50.967043  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:58:50.967090  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:58:51.025471  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:58:51.025621  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:58:51.025764  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:58:51.032580  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:58:51.036477  109844 out.go:252]   - Generating certificates and keys ...
	I1002 20:58:51.036579  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:58:51.036655  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:58:51.036755  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:58:51.036828  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:58:51.036907  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:58:51.036961  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:58:51.037039  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:58:51.037113  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:58:51.037183  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:58:51.037249  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:58:51.037279  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:58:51.037325  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:58:51.187682  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:58:51.260672  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:58:51.923940  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:58:51.962992  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:58:52.022920  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:58:52.023298  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:58:52.025586  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:58:52.027495  109844 out.go:252]   - Booting up control plane ...
	I1002 20:58:52.027608  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:58:52.027713  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:58:52.027804  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:58:52.042406  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:58:52.042511  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:58:52.049022  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:58:52.049337  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:58:52.049378  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:58:52.155568  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:58:52.155766  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:58:53.156432  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000945383s
	I1002 20:58:53.159662  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:58:53.159797  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:58:53.159937  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:58:53.160043  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:02:53.160214  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	I1002 21:02:53.160391  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	I1002 21:02:53.160519  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	I1002 21:02:53.160527  109844 kubeadm.go:318] 
	I1002 21:02:53.160620  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:02:53.160688  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:02:53.160785  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:02:53.160862  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:02:53.160927  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:02:53.161001  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:02:53.161004  109844 kubeadm.go:318] 
	I1002 21:02:53.164399  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:02:53.164524  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:02:53.165091  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:02:53.165168  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
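	The three control-plane checks that time out above are plain HTTPS GETs against fixed endpoints, so they can be replayed by hand to verify that nothing is serving them. A sketch using the same URLs kubeadm prints (connection refused here matches the failure above; -k skips TLS verification since these are self-signed certs):
	
	    # endpoints copied from the control-plane-check lines above
	    minikube ssh -p functional-012915 -- curl -sk https://127.0.0.1:10259/livez    # kube-scheduler
	    minikube ssh -p functional-012915 -- curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
	    minikube ssh -p functional-012915 -- curl -sk https://192.168.49.2:8441/livez  # kube-apiserver
	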
	W1002 21:02:53.165349  109844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000945383s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
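	kubeadm's own hint above is the quickest next step: list whatever containers the runtime actually created and pull their logs. A sketch following that hint verbatim, run inside the node (CONTAINERID is a placeholder for an ID from the first command):
	
	    # from the kubeadm troubleshooting hint above; run inside the node via minikube ssh
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	
	In this run the crictl listings minikube performs below all come back empty ("0 containers"), which is why it falls through to a full kubeadm reset and a second init attempt.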
	
	I1002 21:02:53.165441  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:02:53.609874  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:53.623007  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:02:53.623061  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:53.631223  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:02:53.631235  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 21:02:53.631283  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:53.639093  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:02:53.639137  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:02:53.647228  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:53.655566  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:02:53.655610  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:53.663430  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.671338  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:02:53.671390  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.679032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:53.686944  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:02:53.686993  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:02:53.694170  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:02:53.730792  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:02:53.730837  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:02:53.752207  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:02:53.752260  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:02:53.752295  109844 kubeadm.go:318] OS: Linux
	I1002 21:02:53.752337  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:02:53.752403  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:02:53.752440  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:02:53.752485  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:02:53.752585  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:02:53.752641  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:02:53.752685  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:02:53.752720  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:02:53.811160  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:02:53.811301  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:02:53.811426  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:02:53.817686  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:02:53.822264  109844 out.go:252]   - Generating certificates and keys ...
	I1002 21:02:53.822366  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:02:53.822429  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:02:53.822500  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:02:53.822558  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:02:53.822649  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:02:53.822721  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:02:53.822797  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:02:53.822883  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:02:53.822984  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:02:53.823080  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:02:53.823129  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:02:53.823200  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:02:54.089650  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:02:54.165018  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:02:54.351562  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:02:54.606636  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:02:54.799514  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:02:54.799929  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:02:54.802220  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:02:54.804402  109844 out.go:252]   - Booting up control plane ...
	I1002 21:02:54.804516  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:02:54.804616  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:02:54.804724  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:02:54.818368  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:02:54.818509  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:02:54.825531  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:02:54.826683  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:02:54.826734  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:02:54.927546  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:02:54.927690  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:02:55.429241  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.893032ms
	I1002 21:02:55.432296  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:02:55.432407  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 21:02:55.432483  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:02:55.432583  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:06:55.432671  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	I1002 21:06:55.432869  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	I1002 21:06:55.432961  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	I1002 21:06:55.432968  109844 kubeadm.go:318] 
	I1002 21:06:55.433037  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:06:55.433100  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:06:55.433168  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:06:55.433259  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:06:55.433328  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:06:55.433419  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:06:55.433434  109844 kubeadm.go:318] 
	I1002 21:06:55.436835  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:06:55.436949  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:06:55.437474  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:06:55.437568  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:06:55.437594  109844 kubeadm.go:402] duration metric: took 12m8.007755847s to StartCluster
	I1002 21:06:55.437641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:06:55.437710  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:06:55.464382  109844 cri.go:89] found id: ""
	I1002 21:06:55.464398  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.464404  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:06:55.464409  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:06:55.464469  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:06:55.490606  109844 cri.go:89] found id: ""
	I1002 21:06:55.490623  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.490633  109844 logs.go:284] No container was found matching "etcd"
	I1002 21:06:55.490638  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:06:55.490702  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:06:55.516529  109844 cri.go:89] found id: ""
	I1002 21:06:55.516547  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.516560  109844 logs.go:284] No container was found matching "coredns"
	I1002 21:06:55.516565  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:06:55.516631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:06:55.542896  109844 cri.go:89] found id: ""
	I1002 21:06:55.542913  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.542919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:06:55.542926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:06:55.542976  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:06:55.570192  109844 cri.go:89] found id: ""
	I1002 21:06:55.570206  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.570212  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:06:55.570217  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:06:55.570263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:06:55.596069  109844 cri.go:89] found id: ""
	I1002 21:06:55.596092  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.596102  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:06:55.596107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:06:55.596157  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:06:55.621555  109844 cri.go:89] found id: ""
	I1002 21:06:55.621572  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.621579  109844 logs.go:284] No container was found matching "kindnet"
	I1002 21:06:55.621587  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 21:06:55.621600  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:06:55.635371  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:06:55.635389  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:06:55.691316  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(same five connection-refused errors and closing message as quoted directly above)
	
	** /stderr **
	I1002 21:06:55.691337  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:06:55.691347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:06:55.755862  109844 logs.go:123] Gathering logs for container status ...
	I1002 21:06:55.755886  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:06:55.784730  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 21:06:55.784767  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 21:06:55.854494  109844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:06:55.854545  109844 out.go:285] * 
	W1002 21:06:55.854631  109844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr identical to the kubeadm init failure quoted in full above)
	
	W1002 21:06:55.854657  109844 out.go:285] * 
	W1002 21:06:55.856372  109844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:06:55.860308  109844 out.go:203] 
	W1002 21:06:55.861642  109844 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr identical to the kubeadm init failure quoted in full above)
	
	W1002 21:06:55.861662  109844 out.go:285] * 
	I1002 21:06:55.863851  109844 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229621183Z" level=info msg="createCtr: removing container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229659341Z" level=info msg="createCtr: deleting container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from storage" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.231972859Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.205202556Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=db075587-8f32-464c-9e5e-46c1b2623e7b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.206210632Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=95dc66c0-7314-42be-9120-81260968bf88 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.207175944Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-012915/kube-controller-manager" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.207440039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.212436931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.212976081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.231136681Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232746016Z" level=info msg="createCtr: deleting container ID 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb from idIndex" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232798364Z" level=info msg="createCtr: removing container 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232834131Z" level=info msg="createCtr: deleting container 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb from storage" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.234843413Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-012915_kube-system_7e750209f40bc1241cc38d19476e612c_0" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.205198785Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c19f64ff-9f66-4a07-ad68-475d90819996 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.206616651Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0944eb73-36b5-4739-b2a1-da68c935ff0a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.20799091Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.208380884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.214618925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.215607645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.2302641Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.231957623Z" level=info msg="createCtr: deleting container ID b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394 from idIndex" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232026593Z" level=info msg="createCtr: removing container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232070563Z" level=info msg="createCtr: deleting container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394 from storage" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.236500722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:07:03.864276   16532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:03.864982   16532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:03.866659   16532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:03.868277   16532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:03.868826   16532 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:07:03 up  2:49,  0 user,  load average: 1.19, 0.28, 0.26
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232329   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:06:55 functional-012915 kubelet[14964]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:06:55 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:06:55 functional-012915 kubelet[14964]: E1002 21:06:55.232366   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: E1002 21:06:58.830030   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: I1002 21:06:58.986288   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: E1002 21:06:58.986748   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.204682   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235123   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:00 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:00 functional-012915 kubelet[14964]:  > podSandboxID="78541c97616f3ec4e232f9ab35845168ea396e7284f2b19d4d8b8efd1c5094a2"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235224   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:00 functional-012915 kubelet[14964]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:00 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235258   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 21:07:01 functional-012915 kubelet[14964]: E1002 21:07:01.168800   14964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:07:01 functional-012915 kubelet[14964]: E1002 21:07:01.351347   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.204593   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.236928   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:03 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237038   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:03 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237078   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	

-- /stdout --
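The kubeadm output above spells out the two crictl commands to run, and the CRI-O and kubelet logs point at the same root error ("cannot open sd-bus: No such file or directory") for etcd, kube-apiserver and kube-controller-manager. A minimal triage sketch against this profile, using the socket path and commands quoted from the log (the /run/systemd check is an assumption about the usual cause of an sd-bus open failure, not something the log confirms):

	# List every Kubernetes container, including the failed ones
	# (command quoted from the kubeadm hint above).
	out/minikube-linux-amd64 -p functional-012915 ssh \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Dump the logs of one failing container; CONTAINERID is the
	# placeholder from the kubeadm hint, to be filled in by hand.
	out/minikube-linux-amd64 -p functional-012915 ssh \
	  "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"

	# Assumption: 'cannot open sd-bus' typically means the runtime's
	# systemd cgroup manager cannot reach systemd's bus socket inside
	# the node, so verify that the socket actually exists.
	out/minikube-linux-amd64 -p functional-012915 ssh "ls -l /run/systemd/private"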
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (321.815276ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (3.26s)
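The harness reads one status field per invocation ({{.APIServer}} here, {{.Host}} further down in this report); minikube's status template accepts several fields in one call, which makes the Stopped-apiserver-on-a-running-host combination visible at a glance. A sketch (the Kubelet field name is an assumption based on minikube's default status output):

	out/minikube-linux-amd64 status -p functional-012915 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'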

TestFunctional/parallel/ServiceCmdConnect (2.25s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-012915 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-012915 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (54.594992ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-012915 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-012915 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-012915 describe po hello-node-connect: exit status 1 (52.376154ms)

** stderr ** 
	E1002 21:07:04.945067  126672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:04.945880  126672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:04.947639  126672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:04.948074  126672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:04.949566  126672 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1614: "kubectl --context functional-012915 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-012915 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-012915 logs -l app=hello-node-connect: exit status 1 (60.056866ms)

** stderr ** 
	E1002 21:07:05.069122  126682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.069516  126682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.071001  126682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.071259  126682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1620: "kubectl --context functional-012915 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-012915 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-012915 describe svc hello-node-connect: exit status 1 (58.555777ms)

** stderr ** 
	E1002 21:07:05.126927  126709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.127299  126709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.128848  126709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.129143  126709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:05.130567  126709 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:1626: "kubectl --context functional-012915 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
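All three kubectl post-mortem commands above fail with the same dial error, so the missing hello-node-connect resources are a symptom of the apiserver at 192.168.49.2:8441 being down rather than of the deployment itself. A direct probe of that endpoint separates the two cases without going through kubectl (a sketch; -k is needed because the apiserver serves a cluster-internal certificate):

	curl -k https://192.168.49.2:8441/livez

	# Equivalent check through kubectl's raw passthrough; it fails with
	# the same connection-refused error while the apiserver is down.
	kubectl --context functional-012915 get --raw /livez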
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
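The NetworkSettings block above carries the values the rest of this report keeps referring to: 192.168.49.2 is the node IP behind the refused connections, and 8441/tcp maps to host port 32781. Both can be read back with the same Go-template query minikube itself uses in the "Last Start" log below (there for 22/tcp); a sketch:

	# Host port backing the apiserver (8441/tcp -> 32781 above).
	docker container inspect functional-012915 \
	  -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'

	# Node IP on the profile network (192.168.49.2 above).
	docker container inspect functional-012915 \
	  -f '{{(index .NetworkSettings.Networks "functional-012915").IPAddress}}'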
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (310.179326ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-012915 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 20:54 UTC │                     │
	│ cp      │ functional-012915 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config unset cpus                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service list                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ config  │ functional-012915 config set cpus 2                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config unset cpus                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service list -o json                                                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh echo hello                                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ cp      │ functional-012915 cp functional-012915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd418601657/001/cp-test.txt │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service --namespace=default --https --url hello-node                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh cat /etc/hostname                                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service hello-node --url --format={{.IP}}                                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service hello-node --url                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ cp      │ functional-012915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ addons  │ functional-012915 addons list                                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ addons  │ functional-012915 addons list -o json                                                                                     │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:54:43
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:54:43.844587  109844 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:54:43.844861  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.844865  109844 out.go:374] Setting ErrFile to fd 2...
	I1002 20:54:43.844868  109844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:54:43.845038  109844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:54:43.845491  109844 out.go:368] Setting JSON to false
	I1002 20:54:43.846405  109844 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9425,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:54:43.846500  109844 start.go:140] virtualization: kvm guest
	I1002 20:54:43.848999  109844 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:54:43.850877  109844 notify.go:220] Checking for updates...
	I1002 20:54:43.850921  109844 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 20:54:43.852793  109844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:54:43.854834  109844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:54:43.856692  109844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:54:43.858365  109844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 20:54:43.860403  109844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 20:54:43.863103  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:43.863204  109844 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:54:43.889469  109844 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:54:43.889551  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:43.945234  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.934776618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:43.945360  109844 docker.go:318] overlay module found
	I1002 20:54:43.947426  109844 out.go:179] * Using the docker driver based on existing profile
	I1002 20:54:43.949164  109844 start.go:304] selected driver: docker
	I1002 20:54:43.949174  109844 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:43.949277  109844 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 20:54:43.949355  109844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:54:44.006056  109844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:false NGoroutines:58 SystemTime:2025-10-02 20:54:43.996347889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:54:44.006730  109844 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 20:54:44.006766  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:44.006828  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:44.006872  109844 start.go:348] cluster config:
	{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:44.008980  109844 out.go:179] * Starting "functional-012915" primary control-plane node in "functional-012915" cluster
	I1002 20:54:44.010355  109844 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 20:54:44.011760  109844 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 20:54:44.012903  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:44.012938  109844 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 20:54:44.012951  109844 cache.go:58] Caching tarball of preloaded images
	I1002 20:54:44.012993  109844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 20:54:44.013033  109844 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 20:54:44.013038  109844 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 20:54:44.013135  109844 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/config.json ...
	I1002 20:54:44.033578  109844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 20:54:44.033589  109844 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 20:54:44.033606  109844 cache.go:232] Successfully downloaded all kic artifacts
	I1002 20:54:44.033634  109844 start.go:360] acquireMachinesLock for functional-012915: {Name:mk05b0465db6f8234fcb55c21a78a37886923b46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 20:54:44.033690  109844 start.go:364] duration metric: took 42.12µs to acquireMachinesLock for "functional-012915"
	I1002 20:54:44.033704  109844 start.go:96] Skipping create...Using existing machine configuration
	I1002 20:54:44.033708  109844 fix.go:54] fixHost starting: 
	I1002 20:54:44.033949  109844 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 20:54:44.051193  109844 fix.go:112] recreateIfNeeded on functional-012915: state=Running err=<nil>
	W1002 20:54:44.051212  109844 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 20:54:44.053363  109844 out.go:252] * Updating the running docker "functional-012915" container ...
	I1002 20:54:44.053388  109844 machine.go:93] provisionDockerMachine start ...
	I1002 20:54:44.053449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.071022  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.071263  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.071270  109844 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 20:54:44.215777  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.215796  109844 ubuntu.go:182] provisioning hostname "functional-012915"
	I1002 20:54:44.215846  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.233786  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.234003  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.234012  109844 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-012915 && echo "functional-012915" | sudo tee /etc/hostname
	I1002 20:54:44.386648  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-012915
	
	I1002 20:54:44.386732  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.405002  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.405287  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.405300  109844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-012915' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-012915/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-012915' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 20:54:44.550595  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 20:54:44.550613  109844 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 20:54:44.550630  109844 ubuntu.go:190] setting up certificates
	I1002 20:54:44.550637  109844 provision.go:84] configureAuth start
	I1002 20:54:44.550684  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:44.568931  109844 provision.go:143] copyHostCerts
	I1002 20:54:44.568985  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 20:54:44.569001  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 20:54:44.569078  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 20:54:44.569204  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 20:54:44.569210  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 20:54:44.569250  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 20:54:44.569359  109844 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 20:54:44.569365  109844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 20:54:44.569398  109844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 20:54:44.569559  109844 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.functional-012915 san=[127.0.0.1 192.168.49.2 functional-012915 localhost minikube]
	I1002 20:54:44.708488  109844 provision.go:177] copyRemoteCerts
	I1002 20:54:44.708542  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 20:54:44.708581  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.726299  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:44.828230  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1002 20:54:44.845801  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 20:54:44.864647  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 20:54:44.886083  109844 provision.go:87] duration metric: took 335.431145ms to configureAuth
	I1002 20:54:44.886105  109844 ubuntu.go:206] setting minikube options for container-runtime
	I1002 20:54:44.886322  109844 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 20:54:44.886449  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:44.904652  109844 main.go:141] libmachine: Using SSH client type: native
	I1002 20:54:44.904873  109844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1002 20:54:44.904882  109844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 20:54:45.179966  109844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 20:54:45.179982  109844 machine.go:96] duration metric: took 1.12658745s to provisionDockerMachine
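
The provisioning pass that just completed ends by writing a CRIO_MINIKUBE_OPTIONS override to /etc/sysconfig/crio.minikube and restarting CRI-O over SSH. A hand-run check that the restart picked the override up might look like the following sketch (standard systemd tooling, run on the node, e.g. via minikube ssh; these are not commands from this test run):

	cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio            # expect: active
	sudo journalctl -u crio --since=-2min --no-pager | tail -n 20
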
	I1002 20:54:45.179993  109844 start.go:293] postStartSetup for "functional-012915" (driver="docker")
	I1002 20:54:45.180006  109844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 20:54:45.180072  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 20:54:45.180106  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.198206  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.300487  109844 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 20:54:45.304200  109844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 20:54:45.304220  109844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 20:54:45.304236  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 20:54:45.304298  109844 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 20:54:45.304376  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 20:54:45.304448  109844 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts -> hosts in /etc/test/nested/copy/84100
	I1002 20:54:45.304489  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/84100
	I1002 20:54:45.312033  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:45.329488  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts --> /etc/test/nested/copy/84100/hosts (40 bytes)
	I1002 20:54:45.347685  109844 start.go:296] duration metric: took 167.67425ms for postStartSetup
	I1002 20:54:45.347776  109844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 20:54:45.347829  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.365819  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.465348  109844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 20:54:45.470042  109844 fix.go:56] duration metric: took 1.436324828s for fixHost
	I1002 20:54:45.470060  109844 start.go:83] releasing machines lock for "functional-012915", held for 1.436363927s
	I1002 20:54:45.470140  109844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-012915
	I1002 20:54:45.487689  109844 ssh_runner.go:195] Run: cat /version.json
	I1002 20:54:45.487729  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.487802  109844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 20:54:45.487851  109844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 20:54:45.505570  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.507416  109844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 20:54:45.673212  109844 ssh_runner.go:195] Run: systemctl --version
	I1002 20:54:45.680090  109844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 20:54:45.716457  109844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 20:54:45.721126  109844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 20:54:45.721199  109844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 20:54:45.729223  109844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 20:54:45.729241  109844 start.go:495] detecting cgroup driver to use...
	I1002 20:54:45.729276  109844 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 20:54:45.729332  109844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 20:54:45.744221  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 20:54:45.757221  109844 docker.go:218] disabling cri-docker service (if available) ...
	I1002 20:54:45.757262  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 20:54:45.772166  109844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 20:54:45.785276  109844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 20:54:45.871303  109844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 20:54:45.959396  109844 docker.go:234] disabling docker service ...
	I1002 20:54:45.959460  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 20:54:45.974048  109844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 20:54:45.986376  109844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 20:54:46.071815  109844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 20:54:46.159773  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 20:54:46.172020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 20:54:46.186483  109844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 20:54:46.186540  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.195504  109844 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 20:54:46.195591  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.205033  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.213732  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.222589  109844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 20:54:46.230603  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.239758  109844 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 20:54:46.248194  109844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
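
The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to "systemd", re-adds conmon_cgroup = "pod", and seeds default_sysctls with net.ipv4.ip_unprivileged_port_start=0. A quick sketch to confirm the drop-in converged on those values (the expected matches follow from the commands logged above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
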
	I1002 20:54:46.256956  109844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 20:54:46.264263  109844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 20:54:46.271577  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.354483  109844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 20:54:46.464818  109844 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 20:54:46.464871  109844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 20:54:46.468860  109844 start.go:563] Will wait 60s for crictl version
	I1002 20:54:46.468905  109844 ssh_runner.go:195] Run: which crictl
	I1002 20:54:46.472439  109844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 20:54:46.496177  109844 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
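
The crictl call above needs no endpoint flag because the /etc/crictl.yaml written at 20:54:46.172 pins the default runtime endpoint to CRI-O's socket. The explicit equivalent, as a sketch:

	# With /etc/crictl.yaml in place the endpoint is implicit:
	sudo crictl version
	# Explicit form, useful when the config file is absent:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
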
	I1002 20:54:46.496237  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.524348  109844 ssh_runner.go:195] Run: crio --version
	I1002 20:54:46.554038  109844 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 20:54:46.555482  109844 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 20:54:46.572825  109844 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 20:54:46.579140  109844 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1002 20:54:46.580455  109844 kubeadm.go:883] updating cluster {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 20:54:46.580599  109844 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 20:54:46.580680  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.615204  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.615216  109844 crio.go:433] Images already preloaded, skipping extraction
	I1002 20:54:46.615259  109844 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 20:54:46.641403  109844 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 20:54:46.641428  109844 cache_images.go:85] Images are preloaded, skipping loading
	I1002 20:54:46.641435  109844 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1002 20:54:46.641523  109844 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-012915 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 20:54:46.641593  109844 ssh_runner.go:195] Run: crio config
	I1002 20:54:46.685535  109844 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1002 20:54:46.685558  109844 cni.go:84] Creating CNI manager for ""
	I1002 20:54:46.685570  109844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 20:54:46.685580  109844 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 20:54:46.685603  109844 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-012915 NodeName:functional-012915 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 20:54:46.685708  109844 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-012915"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
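
The rendered kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being compared against the live copy. When checking such a file by hand, recent kubeadm releases can validate it offline; a sketch (the validate subcommand is assumed to be available in this v1.34-era kubeadm, and the paths are copied from the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
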
	
	I1002 20:54:46.685786  109844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 20:54:46.694168  109844 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 20:54:46.694220  109844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 20:54:46.701920  109844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1002 20:54:46.714502  109844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 20:54:46.726979  109844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2063 bytes)
	I1002 20:54:46.739184  109844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 20:54:46.742937  109844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 20:54:46.828267  109844 ssh_runner.go:195] Run: sudo systemctl start kubelet
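
After the daemon-reload the kubelet is started, but nothing in the log yet confirms it stays up. A manual health check would be the usual systemd pair (a sketch, not part of the test run):

	systemctl is-active kubelet                                    # expect: active
	sudo journalctl -u kubelet --since=-1min --no-pager | tail -n 20
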
	I1002 20:54:46.841290  109844 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915 for IP: 192.168.49.2
	I1002 20:54:46.841302  109844 certs.go:195] generating shared ca certs ...
	I1002 20:54:46.841315  109844 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 20:54:46.841439  109844 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 20:54:46.841480  109844 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 20:54:46.841486  109844 certs.go:257] generating profile certs ...
	I1002 20:54:46.841556  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.key
	I1002 20:54:46.841595  109844 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key.b416a645
	I1002 20:54:46.841625  109844 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key
	I1002 20:54:46.841728  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 20:54:46.841789  109844 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 20:54:46.841795  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 20:54:46.841816  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 20:54:46.841847  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 20:54:46.841870  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 20:54:46.841921  109844 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 20:54:46.842546  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 20:54:46.860833  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 20:54:46.878996  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 20:54:46.897504  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 20:54:46.914816  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 20:54:46.931903  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 20:54:46.948901  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 20:54:46.965859  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 20:54:46.982982  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 20:54:47.000600  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 20:54:47.018108  109844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 20:54:47.035448  109844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 20:54:47.047886  109844 ssh_runner.go:195] Run: openssl version
	I1002 20:54:47.053789  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 20:54:47.062187  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066098  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.066148  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 20:54:47.100024  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 20:54:47.108632  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 20:54:47.118249  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122176  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.122226  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 20:54:47.156807  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 20:54:47.165260  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 20:54:47.173954  109844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177825  109844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.177879  109844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 20:54:47.212057  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
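
The test -L / ln -fs pattern above implements OpenSSL's subject-hash lookup: each CA in /etc/ssl/certs must be reachable under the name <subject-hash>.0 for openssl to find it. A sketch of how the b5213941.0 symlink name for minikubeCA is derived:

	# /etc/ssl/certs lookups go by subject-name hash; the <hash>.0 symlink
	# names above come straight from this value.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "${HASH}"                     # b5213941 for minikubeCA, per the log
	ls -l "/etc/ssl/certs/${HASH}.0"
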
	I1002 20:54:47.220716  109844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 20:54:47.224961  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 20:54:47.259305  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 20:54:47.293091  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 20:54:47.327486  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 20:54:47.361854  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 20:54:47.395871  109844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
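
Each of the -checkend 86400 runs above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, so minikube can skip regenerating it. The same check with an explicit branch, as a sketch:

	if openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo 'certificate valid for at least another 24h'
	else
	  echo 'certificate expires within 24h; regeneration needed'
	fi
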
	I1002 20:54:47.429860  109844 kubeadm.go:400] StartCluster: {Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 20:54:47.429950  109844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 20:54:47.429996  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.458514  109844 cri.go:89] found id: ""
	I1002 20:54:47.458565  109844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 20:54:47.466572  109844 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 20:54:47.466595  109844 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 20:54:47.466642  109844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 20:54:47.473967  109844 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.474578  109844 kubeconfig.go:125] found "functional-012915" server: "https://192.168.49.2:8441"
	I1002 20:54:47.476054  109844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 20:54:47.483705  109844 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-02 20:40:16.332502550 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-02 20:54:46.736875917 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
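
The drift check above rests on diff's exit status: 0 means the staged and live kubeadm configs are identical, 1 means they differ (here, only the enable-admission-plugins value) and the cluster must be reconfigured. The same decision as a sketch:

	# GNU diff exits 0 = identical, 1 = differ, >1 = error
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo 'kubeadm config drift detected; reconfiguring cluster'
	fi
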
	I1002 20:54:47.483713  109844 kubeadm.go:1160] stopping kube-system containers ...
	I1002 20:54:47.483724  109844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1002 20:54:47.483782  109844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 20:54:47.509815  109844 cri.go:89] found id: ""
	I1002 20:54:47.509892  109844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 20:54:47.553124  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:54:47.561262  109844 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  2 20:44 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  2 20:44 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5676 Oct  2 20:44 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  2 20:44 /etc/kubernetes/scheduler.conf
	
	I1002 20:54:47.561322  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:54:47.569534  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:54:47.577441  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.577491  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:54:47.585032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.592533  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.592581  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:54:47.600040  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:54:47.607328  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1002 20:54:47.607365  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 20:54:47.614787  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:54:47.622401  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:47.663022  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.396196  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.576311  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 20:54:48.625411  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
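
Rather than a full kubeadm init, the restart path replays individual init phases against the new config: certs, kubeconfig, kubelet-start, control-plane, and local etcd. The same sequence condensed into a loop for readability (paths and the PATH override are copied from the log; minikube actually issues them as separate ssh_runner calls, as shown above):

	KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
	  # word-splitting of $phase into subcommand arguments is intentional
	  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" "$KUBEADM" init phase $phase --config "$CFG"
	done
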
	I1002 20:54:48.679287  109844 api_server.go:52] waiting for apiserver process to appear ...
	I1002 20:54:48.679369  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.179574  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:49.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.180317  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:50.680215  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.179826  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:51.679618  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.180390  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:52.679884  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.180480  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:53.679973  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.180264  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:54.679704  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.179880  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:55.679789  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.179784  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:56.679611  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.179499  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:57.680068  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.179593  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:58.680342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.180363  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:54:59.679719  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.180464  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:00.680219  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.179572  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:01.679989  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.179867  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:02.680465  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.179787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:03.680167  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.179791  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:04.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.179712  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:05.680091  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.179473  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:06.680424  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.179668  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:07.680232  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.180357  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:08.679960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.180406  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:09.679893  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.180470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:10.680102  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.180344  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:11.679766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.180348  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:12.679643  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.180121  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:13.679815  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.179492  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:14.679526  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.180454  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:15.679641  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.180481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:16.679596  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.179991  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:17.680447  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.179814  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:18.679604  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.180037  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:19.680355  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.180349  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:20.679595  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.179952  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:21.680267  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.179901  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:22.680376  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.180156  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:23.679931  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.180000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:24.680128  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.179481  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:25.680099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.180243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:26.680414  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.180290  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:27.680286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.179866  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:28.680103  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.180483  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:29.680117  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.179477  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:30.679634  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.180114  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:31.680389  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.179833  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:32.679848  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.180002  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:33.679520  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.180220  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:34.679624  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.179932  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:35.679910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.180365  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:36.679590  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.179548  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:37.680243  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.179674  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:38.680191  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.179865  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:39.680176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.179534  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:40.679913  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.180457  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:41.679626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.179639  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:42.679943  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.179573  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:43.680221  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.180342  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:44.679876  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.180254  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:45.679532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.180286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:46.679433  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.179977  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:47.679540  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:48.180382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
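The half-second cadence above is a readiness poll: minikube keeps asking pgrep for a kube-apiserver process until one appears or its wait deadline expires. The sketch below is a hypothetical local stand-in for that loop, not minikube's actual ssh_runner code; the pattern and the ~0.5s interval are taken from the log, the 2-minute deadline is an assumption.

// poll_apiserver.go - retry pgrep every 500ms until a kube-apiserver
// process shows up or the deadline passes (illustrative only).
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		// pgrep exits 0 as soon as at least one process matches the pattern.
		if exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err)
	}
}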
	I1002 20:55:48.679912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:48.679971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:48.706989  109844 cri.go:89] found id: ""
	I1002 20:55:48.707014  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.707020  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:48.707025  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:48.707071  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:48.733283  109844 cri.go:89] found id: ""
	I1002 20:55:48.733299  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.733306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:48.733311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:48.733361  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:48.761228  109844 cri.go:89] found id: ""
	I1002 20:55:48.761245  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.761250  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:48.761256  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:48.761313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:48.788501  109844 cri.go:89] found id: ""
	I1002 20:55:48.788516  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.788522  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:48.788527  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:48.788579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:48.814616  109844 cri.go:89] found id: ""
	I1002 20:55:48.814636  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.814646  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:48.814651  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:48.814703  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:48.841518  109844 cri.go:89] found id: ""
	I1002 20:55:48.841538  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.841548  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:48.841555  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:48.841624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:48.869254  109844 cri.go:89] found id: ""
	I1002 20:55:48.869278  109844 logs.go:282] 0 containers: []
	W1002 20:55:48.869288  109844 logs.go:284] No container was found matching "kindnet"
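Each "listing CRI containers" step above shells out to crictl with a name filter and treats an empty ID list as "component not running". A minimal sketch of that enumeration, assuming crictl is on PATH; the component names are the ones probed in the log, and the helper is hypothetical rather than minikube's cri.go code.

// list_containers.go - crictl prints one container ID per line;
// an empty result means the component has no container at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		if ids := containerIDs(c); len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		} else {
			fmt.Println(c, ids)
		}
	}
}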
	I1002 20:55:48.869311  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:48.869335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:48.883919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:48.883937  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:48.941687  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:48.933979    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.935001    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.936618    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.937054    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:48.938614    6702 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
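Every kubectl attempt above dies with "connect: connection refused" on localhost:8441, which simply means nothing is listening on the apiserver port. A quick way to confirm that from the host is a plain TCP dial; the sketch below is illustrative only, with the port taken from the errors above.

// probe_apiserver.go - check whether anything is bound to the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// "connect: connection refused" here matches the kubectl failures:
		// no listener exists because kube-apiserver never started.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}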
	I1002 20:55:48.941698  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:48.941710  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:49.007787  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:49.007810  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
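The "container status" command just above embeds a fallback: resolve crictl via `which` (keeping the bare name if that finds nothing) and, if the invocation fails outright, try the docker CLI instead. A hypothetical Go rendering of the same fallback, not minikube's own helper:

// container_status.go - Go version of
// `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	tool := "crictl"
	if path, err := exec.LookPath("crictl"); err == nil {
		tool = path // like `which crictl`; otherwise keep the bare name
	}
	if out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	// crictl missing or failed: fall back to the docker CLI.
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(string(out))
}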
	I1002 20:55:49.038133  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:49.038157  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
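The kubelet, CRI-O, and dmesg gathering steps above all follow the same shape: run a tail-style command and capture the last few hundred lines. A sketch of the journalctl variant, assuming systemd units named crio and kubelet as in the log; the helper name is invented for illustration.

// tail_units.go - mirror of `journalctl -u <unit> -n 400` as run above.
package main

import (
	"fmt"
	"os/exec"
)

func unitTail(unit string, n int) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit,
		"-n", fmt.Sprint(n)).Output()
	return string(out), err
}

func main() {
	for _, u := range []string{"crio", "kubelet"} {
		logs, err := unitTail(u, 400)
		if err != nil {
			fmt.Printf("journalctl failed for %s: %v\n", u, err)
			continue
		}
		fmt.Printf("=== last 400 lines of %s: %d bytes ===\n", u, len(logs))
	}
}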
	I1002 20:55:51.609461  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:51.620229  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:51.620296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:51.647003  109844 cri.go:89] found id: ""
	I1002 20:55:51.647022  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.647028  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:51.647033  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:51.647087  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:51.673376  109844 cri.go:89] found id: ""
	I1002 20:55:51.673394  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.673402  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:51.673408  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:51.673467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:51.700685  109844 cri.go:89] found id: ""
	I1002 20:55:51.700701  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.700719  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:51.700724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:51.700792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:51.726660  109844 cri.go:89] found id: ""
	I1002 20:55:51.726677  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.726684  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:51.726689  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:51.726762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:51.753630  109844 cri.go:89] found id: ""
	I1002 20:55:51.753646  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.753652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:51.753657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:51.753750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:51.779127  109844 cri.go:89] found id: ""
	I1002 20:55:51.779146  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.779155  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:51.779161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:51.779235  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:51.805960  109844 cri.go:89] found id: ""
	I1002 20:55:51.805979  109844 logs.go:282] 0 containers: []
	W1002 20:55:51.805988  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:51.805997  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:51.806006  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:51.835916  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:51.835939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:51.905127  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:51.905159  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:51.920189  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:51.920209  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:51.976010  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:51.969042    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.969686    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971200    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.971624    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:51.973116    6845 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:55:51.976023  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:51.976035  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.539314  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:54.550248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:54.550316  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:54.577239  109844 cri.go:89] found id: ""
	I1002 20:55:54.577254  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.577261  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:54.577265  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:54.577311  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:54.603907  109844 cri.go:89] found id: ""
	I1002 20:55:54.603927  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.603935  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:54.603941  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:54.603991  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:54.630527  109844 cri.go:89] found id: ""
	I1002 20:55:54.630543  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.630549  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:54.630562  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:54.630624  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:54.658661  109844 cri.go:89] found id: ""
	I1002 20:55:54.658680  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.658688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:54.658693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:54.658774  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:54.684747  109844 cri.go:89] found id: ""
	I1002 20:55:54.684769  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.684807  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:54.684814  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:54.684890  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:54.711715  109844 cri.go:89] found id: ""
	I1002 20:55:54.711732  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.711777  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:54.711785  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:54.711842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:54.738961  109844 cri.go:89] found id: ""
	I1002 20:55:54.738979  109844 logs.go:282] 0 containers: []
	W1002 20:55:54.738987  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:54.738996  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:54.739009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:54.806223  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:54.806250  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:54.820749  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:54.820771  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:54.877826  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:54.870974    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.871493    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873132    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.873593    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:54.875041    6946 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:55:54.877845  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:54.877872  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:54.943126  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:54.943152  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:55:57.473420  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:55:57.484300  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:55:57.484350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:55:57.510256  109844 cri.go:89] found id: ""
	I1002 20:55:57.510274  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.510281  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:55:57.510285  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:55:57.510350  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:55:57.536726  109844 cri.go:89] found id: ""
	I1002 20:55:57.536756  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.536766  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:55:57.536773  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:55:57.536824  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:55:57.562388  109844 cri.go:89] found id: ""
	I1002 20:55:57.562407  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.562416  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:55:57.562421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:55:57.562467  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:55:57.589542  109844 cri.go:89] found id: ""
	I1002 20:55:57.589569  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.589577  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:55:57.589582  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:55:57.589647  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:55:57.616763  109844 cri.go:89] found id: ""
	I1002 20:55:57.616781  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.616790  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:55:57.616796  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:55:57.616842  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:55:57.642618  109844 cri.go:89] found id: ""
	I1002 20:55:57.642637  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.642646  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:55:57.642652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:55:57.642700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:55:57.668671  109844 cri.go:89] found id: ""
	I1002 20:55:57.668686  109844 logs.go:282] 0 containers: []
	W1002 20:55:57.668693  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:55:57.668700  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:55:57.668714  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:55:57.733001  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:55:57.733023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:55:57.747314  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:55:57.747338  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:55:57.803286  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:55:57.796365    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.796951    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.798536    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.799065    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:55:57.800640    7069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:55:57.803303  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:55:57.803316  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:55:57.869484  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:55:57.869515  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:00.399551  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:00.410170  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:00.410218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:00.436280  109844 cri.go:89] found id: ""
	I1002 20:56:00.436299  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.436306  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:00.436313  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:00.436368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:00.463444  109844 cri.go:89] found id: ""
	I1002 20:56:00.463461  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.463467  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:00.463479  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:00.463542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:00.489898  109844 cri.go:89] found id: ""
	I1002 20:56:00.489912  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.489919  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:00.489923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:00.489970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:00.516907  109844 cri.go:89] found id: ""
	I1002 20:56:00.516925  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.516932  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:00.516937  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:00.516987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:00.543495  109844 cri.go:89] found id: ""
	I1002 20:56:00.543512  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.543519  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:00.543524  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:00.543575  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:00.569648  109844 cri.go:89] found id: ""
	I1002 20:56:00.569664  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.569670  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:00.569675  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:00.569722  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:00.596695  109844 cri.go:89] found id: ""
	I1002 20:56:00.596712  109844 logs.go:282] 0 containers: []
	W1002 20:56:00.596719  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:00.596726  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:00.596756  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:00.664900  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:00.664923  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:00.679401  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:00.679420  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:00.736278  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:00.729378    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.729909    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731467    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.731953    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:00.733441    7188 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:00.736292  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:00.736302  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:00.801067  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:00.801089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:03.333225  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:03.344042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:03.344094  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:03.370652  109844 cri.go:89] found id: ""
	I1002 20:56:03.370668  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.370675  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:03.370680  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:03.370749  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:03.398592  109844 cri.go:89] found id: ""
	I1002 20:56:03.398609  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.398616  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:03.398621  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:03.398675  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:03.425268  109844 cri.go:89] found id: ""
	I1002 20:56:03.425284  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.425292  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:03.425297  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:03.425348  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:03.451631  109844 cri.go:89] found id: ""
	I1002 20:56:03.451645  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.451651  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:03.451655  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:03.451713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:03.476703  109844 cri.go:89] found id: ""
	I1002 20:56:03.476718  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.476728  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:03.476748  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:03.476804  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:03.502825  109844 cri.go:89] found id: ""
	I1002 20:56:03.502840  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.502847  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:03.502852  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:03.502897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:03.530314  109844 cri.go:89] found id: ""
	I1002 20:56:03.530330  109844 logs.go:282] 0 containers: []
	W1002 20:56:03.530337  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:03.530345  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:03.530358  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:03.596281  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:03.596307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:03.611117  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:03.611135  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:03.669231  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:03.661298    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.661803    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.663484    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.664056    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:03.665688    7308 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:03.669243  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:03.669254  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:03.735723  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:03.735761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:06.266853  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:06.278118  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:06.278167  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:06.304229  109844 cri.go:89] found id: ""
	I1002 20:56:06.304246  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.304252  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:06.304258  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:06.304314  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:06.331492  109844 cri.go:89] found id: ""
	I1002 20:56:06.331510  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.331517  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:06.331522  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:06.331574  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:06.357300  109844 cri.go:89] found id: ""
	I1002 20:56:06.357319  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.357328  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:06.357333  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:06.357381  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:06.385072  109844 cri.go:89] found id: ""
	I1002 20:56:06.385092  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.385101  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:06.385107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:06.385170  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:06.412479  109844 cri.go:89] found id: ""
	I1002 20:56:06.412499  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.412509  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:06.412516  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:06.412571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:06.439019  109844 cri.go:89] found id: ""
	I1002 20:56:06.439035  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.439042  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:06.439049  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:06.439105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:06.466228  109844 cri.go:89] found id: ""
	I1002 20:56:06.466244  109844 logs.go:282] 0 containers: []
	W1002 20:56:06.466250  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:06.466257  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:06.466268  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:06.530972  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:06.530997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:06.546016  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:06.546039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:06.604192  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:06.597141    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.597599    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.599321    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.600026    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:06.601244    7441 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:06.604215  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:06.604226  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:06.668313  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:06.668341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:09.199470  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:09.210902  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:09.210947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:09.237464  109844 cri.go:89] found id: ""
	I1002 20:56:09.237481  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.237488  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:09.237503  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:09.237549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:09.264849  109844 cri.go:89] found id: ""
	I1002 20:56:09.264868  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.264876  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:09.264884  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:09.264944  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:09.291066  109844 cri.go:89] found id: ""
	I1002 20:56:09.291083  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.291088  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:09.291094  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:09.291141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:09.316972  109844 cri.go:89] found id: ""
	I1002 20:56:09.316991  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.317001  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:09.317008  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:09.317066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:09.342462  109844 cri.go:89] found id: ""
	I1002 20:56:09.342479  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.342488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:09.342494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:09.342560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:09.369344  109844 cri.go:89] found id: ""
	I1002 20:56:09.369361  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.369370  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:09.369377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:09.369431  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:09.396279  109844 cri.go:89] found id: ""
	I1002 20:56:09.396295  109844 logs.go:282] 0 containers: []
	W1002 20:56:09.396301  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:09.396309  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:09.396325  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:09.462471  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:09.462495  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:09.477360  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:09.477379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:09.533977  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:09.526956    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.527598    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529217    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.529656    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:09.531136    7557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:09.533991  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:09.534001  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:09.597829  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:09.597856  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:12.129375  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:12.140711  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:12.140778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:12.167268  109844 cri.go:89] found id: ""
	I1002 20:56:12.167287  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.167295  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:12.167301  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:12.167351  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:12.193605  109844 cri.go:89] found id: ""
	I1002 20:56:12.193620  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.193625  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:12.193630  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:12.193674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:12.220258  109844 cri.go:89] found id: ""
	I1002 20:56:12.220272  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.220279  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:12.220284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:12.220342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:12.246824  109844 cri.go:89] found id: ""
	I1002 20:56:12.246839  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.246845  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:12.246849  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:12.246897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:12.273611  109844 cri.go:89] found id: ""
	I1002 20:56:12.273631  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.273639  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:12.273647  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:12.273708  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:12.300838  109844 cri.go:89] found id: ""
	I1002 20:56:12.300856  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.300862  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:12.300868  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:12.300916  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:12.328414  109844 cri.go:89] found id: ""
	I1002 20:56:12.328429  109844 logs.go:282] 0 containers: []
	W1002 20:56:12.328435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:12.328442  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:12.328453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:12.397603  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:12.397628  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:12.412076  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:12.412093  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:12.469369  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:12.462192    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.462709    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464313    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464791    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.466331    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:12.462192    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.462709    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464313    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.464791    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:12.466331    7682 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:12.469384  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:12.469399  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:12.530104  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:12.530130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
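Every iteration fails the same way: "dial tcp [::1]:8441: connect: connection refused", i.e. nothing is listening on the apiserver port the kubeconfig points at, which is consistent with crictl finding no kube-apiserver container. A quick manual check, assuming port 8441 as logged; the curl and ss probes are illustrative additions, not part of the test harness:

    # Probe the apiserver endpoint the kubeconfig targets; -k skips TLS verification.
    curl -ksS https://localhost:8441/healthz || echo "apiserver not reachable"
    # Confirm whether any process is bound to the port at all.
    sudo ss -ltnp | grep ':8441' || echo "port 8441 not bound"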
	I1002 20:56:15.060450  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:15.071089  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:15.071138  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:15.097730  109844 cri.go:89] found id: ""
	I1002 20:56:15.097766  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.097774  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:15.097783  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:15.097832  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:15.123349  109844 cri.go:89] found id: ""
	I1002 20:56:15.123366  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.123376  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:15.123382  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:15.123445  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:15.149644  109844 cri.go:89] found id: ""
	I1002 20:56:15.149659  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.149665  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:15.149670  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:15.149717  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:15.175442  109844 cri.go:89] found id: ""
	I1002 20:56:15.175464  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.175473  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:15.175480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:15.175534  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:15.200859  109844 cri.go:89] found id: ""
	I1002 20:56:15.200875  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.200881  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:15.200886  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:15.200931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:15.226770  109844 cri.go:89] found id: ""
	I1002 20:56:15.226786  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.226792  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:15.226797  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:15.226857  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:15.252444  109844 cri.go:89] found id: ""
	I1002 20:56:15.252462  109844 logs.go:282] 0 containers: []
	W1002 20:56:15.252472  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:15.252480  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:15.252493  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:15.281148  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:15.281166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:15.350382  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:15.350406  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:15.365144  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:15.365163  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:15.421764  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:15.414607    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.415162    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.416781    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.417290    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.418840    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:15.414607    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.415162    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.416781    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.417290    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:15.418840    7815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:15.421789  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:15.421802  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:17.982382  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:17.992951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:17.992999  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:18.018834  109844 cri.go:89] found id: ""
	I1002 20:56:18.018853  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.018862  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:18.018869  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:18.018923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:18.045169  109844 cri.go:89] found id: ""
	I1002 20:56:18.045186  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.045192  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:18.045196  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:18.045245  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:18.071187  109844 cri.go:89] found id: ""
	I1002 20:56:18.071202  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.071209  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:18.071213  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:18.071263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:18.099002  109844 cri.go:89] found id: ""
	I1002 20:56:18.099021  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.099031  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:18.099037  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:18.099086  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:18.124458  109844 cri.go:89] found id: ""
	I1002 20:56:18.124474  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.124481  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:18.124486  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:18.124532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:18.151052  109844 cri.go:89] found id: ""
	I1002 20:56:18.151070  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.151078  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:18.151086  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:18.151147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:18.177070  109844 cri.go:89] found id: ""
	I1002 20:56:18.177088  109844 logs.go:282] 0 containers: []
	W1002 20:56:18.177097  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:18.177106  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:18.177120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:18.245531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:18.245551  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:18.259536  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:18.259555  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:18.315828  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:18.309110    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.309608    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311154    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311572    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.313080    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:18.309110    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.309608    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311154    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.311572    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:18.313080    7931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:18.315838  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:18.315849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:18.378894  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:18.378917  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:20.910289  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:20.921508  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:20.921565  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:20.949001  109844 cri.go:89] found id: ""
	I1002 20:56:20.949015  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.949022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:20.949027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:20.949073  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:20.975236  109844 cri.go:89] found id: ""
	I1002 20:56:20.975253  109844 logs.go:282] 0 containers: []
	W1002 20:56:20.975259  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:20.975264  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:20.975310  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:21.002161  109844 cri.go:89] found id: ""
	I1002 20:56:21.002176  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.002183  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:21.002188  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:21.002236  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:21.029183  109844 cri.go:89] found id: ""
	I1002 20:56:21.029203  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.029211  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:21.029218  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:21.029291  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:21.056171  109844 cri.go:89] found id: ""
	I1002 20:56:21.056187  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.056193  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:21.056198  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:21.056248  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:21.083782  109844 cri.go:89] found id: ""
	I1002 20:56:21.083801  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.083810  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:21.083817  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:21.083873  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:21.110480  109844 cri.go:89] found id: ""
	I1002 20:56:21.110496  109844 logs.go:282] 0 containers: []
	W1002 20:56:21.110503  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:21.110512  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:21.110526  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:21.178200  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:21.178224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:21.192348  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:21.192367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:21.248832  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:21.241470    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.242149    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.243832    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.244309    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.245873    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:21.241470    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.242149    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.243832    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.244309    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:21.245873    8053 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:21.248843  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:21.248866  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:21.313859  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:21.313939  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:23.844485  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:23.855704  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:23.855785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:23.881987  109844 cri.go:89] found id: ""
	I1002 20:56:23.882003  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.882009  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:23.882014  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:23.882058  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:23.908092  109844 cri.go:89] found id: ""
	I1002 20:56:23.908109  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.908115  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:23.908121  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:23.908175  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:23.933489  109844 cri.go:89] found id: ""
	I1002 20:56:23.933503  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.933509  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:23.933514  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:23.933560  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:23.958962  109844 cri.go:89] found id: ""
	I1002 20:56:23.958978  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.958985  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:23.958991  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:23.959039  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:23.985206  109844 cri.go:89] found id: ""
	I1002 20:56:23.985222  109844 logs.go:282] 0 containers: []
	W1002 20:56:23.985231  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:23.985237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:23.985298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:24.011436  109844 cri.go:89] found id: ""
	I1002 20:56:24.011453  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.011460  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:24.011465  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:24.011512  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:24.036401  109844 cri.go:89] found id: ""
	I1002 20:56:24.036417  109844 logs.go:282] 0 containers: []
	W1002 20:56:24.036423  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:24.036431  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:24.036447  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:24.050446  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:24.050465  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:24.105883  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:24.099062    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.099587    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101050    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101530    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.103091    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:24.099062    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.099587    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101050    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.101530    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:24.103091    8176 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:24.105896  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:24.105906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:24.165660  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:24.165683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:24.194659  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:24.194677  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:26.765857  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:26.776723  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:26.776795  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:26.803878  109844 cri.go:89] found id: ""
	I1002 20:56:26.803894  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.803901  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:26.803906  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:26.803960  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:26.828926  109844 cri.go:89] found id: ""
	I1002 20:56:26.828944  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.828950  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:26.828955  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:26.829002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:26.854812  109844 cri.go:89] found id: ""
	I1002 20:56:26.854828  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.854834  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:26.854840  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:26.854887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:26.881665  109844 cri.go:89] found id: ""
	I1002 20:56:26.881682  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.881688  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:26.881693  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:26.881763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:26.909265  109844 cri.go:89] found id: ""
	I1002 20:56:26.909284  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.909294  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:26.909301  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:26.909355  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:26.935117  109844 cri.go:89] found id: ""
	I1002 20:56:26.935133  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.935139  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:26.935144  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:26.935200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:26.961377  109844 cri.go:89] found id: ""
	I1002 20:56:26.961392  109844 logs.go:282] 0 containers: []
	W1002 20:56:26.961399  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:26.961406  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:26.961417  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:26.989187  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:26.989204  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:27.056354  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:27.056379  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:27.070926  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:27.070944  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:27.127442  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:27.119650    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.120189    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.122490    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.123013    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.124580    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:27.119650    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.120189    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.122490    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.123013    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:27.124580    8307 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:27.127456  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:27.127473  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:29.687547  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:29.698733  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:29.698810  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:29.724706  109844 cri.go:89] found id: ""
	I1002 20:56:29.724721  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.724727  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:29.724732  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:29.724794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:29.752274  109844 cri.go:89] found id: ""
	I1002 20:56:29.752291  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.752297  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:29.752308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:29.752369  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:29.778792  109844 cri.go:89] found id: ""
	I1002 20:56:29.778807  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.778813  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:29.778818  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:29.778867  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:29.804447  109844 cri.go:89] found id: ""
	I1002 20:56:29.804468  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.804485  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:29.804490  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:29.804540  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:29.830280  109844 cri.go:89] found id: ""
	I1002 20:56:29.830301  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.830310  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:29.830316  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:29.830375  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:29.855193  109844 cri.go:89] found id: ""
	I1002 20:56:29.855209  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.855215  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:29.855220  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:29.855270  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:29.881092  109844 cri.go:89] found id: ""
	I1002 20:56:29.881107  109844 logs.go:282] 0 containers: []
	W1002 20:56:29.881114  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:29.881122  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:29.881132  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:29.948531  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:29.948565  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:29.962996  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:29.963015  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:30.019733  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:30.012437    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.013106    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.014710    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.015163    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.016849    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:30.012437    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.013106    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.014710    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.015163    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:30.016849    8426 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:30.019769  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:30.019784  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:30.080302  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:30.080332  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:32.612620  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:32.623619  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:32.623669  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:32.649868  109844 cri.go:89] found id: ""
	I1002 20:56:32.649884  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.649890  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:32.649895  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:32.649947  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:32.676993  109844 cri.go:89] found id: ""
	I1002 20:56:32.677011  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.677020  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:32.677026  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:32.677084  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:32.703005  109844 cri.go:89] found id: ""
	I1002 20:56:32.703026  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.703036  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:32.703042  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:32.703105  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:32.728641  109844 cri.go:89] found id: ""
	I1002 20:56:32.728657  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.728663  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:32.728668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:32.728716  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:32.754904  109844 cri.go:89] found id: ""
	I1002 20:56:32.754922  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.754931  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:32.754938  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:32.754996  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:32.780607  109844 cri.go:89] found id: ""
	I1002 20:56:32.780623  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.780632  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:32.780638  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:32.780700  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:32.805534  109844 cri.go:89] found id: ""
	I1002 20:56:32.805549  109844 logs.go:282] 0 containers: []
	W1002 20:56:32.805555  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:32.805564  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:32.805575  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:32.871168  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:32.871190  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:32.885484  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:32.885503  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:32.942338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:32.935227    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.935814    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937470    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937975    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.939512    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:32.935227    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.935814    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937470    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.937975    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:32.939512    8545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:56:32.942348  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:32.942361  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:33.006822  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:33.006849  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.539700  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:35.550793  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:35.550843  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:35.577123  109844 cri.go:89] found id: ""
	I1002 20:56:35.577141  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.577152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:35.577158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:35.577205  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:35.603414  109844 cri.go:89] found id: ""
	I1002 20:56:35.603429  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.603435  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:35.603440  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:35.603487  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:35.630119  109844 cri.go:89] found id: ""
	I1002 20:56:35.630139  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.630151  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:35.630161  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:35.630216  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:35.656385  109844 cri.go:89] found id: ""
	I1002 20:56:35.656400  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.656406  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:35.656410  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:35.656461  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:35.683092  109844 cri.go:89] found id: ""
	I1002 20:56:35.683109  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.683117  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:35.683121  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:35.683168  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:35.709629  109844 cri.go:89] found id: ""
	I1002 20:56:35.709644  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.709651  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:35.709657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:35.709713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:35.737006  109844 cri.go:89] found id: ""
	I1002 20:56:35.737025  109844 logs.go:282] 0 containers: []
	W1002 20:56:35.737035  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:35.737043  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:35.737054  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:35.767533  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:35.767556  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:35.833953  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:35.833980  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:35.848818  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:35.848839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:35.906998  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:35.899806    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.900358    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.901937    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.902434    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:35.903965    8683 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:35.907011  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:35.907024  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
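
The cycle above issues the same CRI query once per control-plane component and treats empty output as "0 containers". A minimal local sketch of that pattern in Go (hypothetical listContainers helper; minikube's cri.go runs the identical crictl command, but over SSH inside the node):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line; an empty result corresponds to the
// `0 containers: []` lines in the log above.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
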
	I1002 20:56:38.471319  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:38.481958  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:38.482010  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:38.507711  109844 cri.go:89] found id: ""
	I1002 20:56:38.507730  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.507751  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:38.507758  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:38.507820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:38.534015  109844 cri.go:89] found id: ""
	I1002 20:56:38.534033  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.534039  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:38.534045  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:38.534096  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:38.561341  109844 cri.go:89] found id: ""
	I1002 20:56:38.561358  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.561367  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:38.561373  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:38.561433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:38.587872  109844 cri.go:89] found id: ""
	I1002 20:56:38.587891  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.587901  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:38.587907  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:38.587973  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:38.612399  109844 cri.go:89] found id: ""
	I1002 20:56:38.612418  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.612427  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:38.612433  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:38.612480  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:38.639104  109844 cri.go:89] found id: ""
	I1002 20:56:38.639120  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.639127  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:38.639132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:38.639190  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:38.667322  109844 cri.go:89] found id: ""
	I1002 20:56:38.667339  109844 logs.go:282] 0 containers: []
	W1002 20:56:38.667345  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:38.667352  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:38.667363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:38.682168  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:38.682187  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:38.740651  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:38.733357    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.733969    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.735590    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.736050    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:38.737649    8784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:38.740663  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:38.740674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:38.805774  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:38.805798  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:38.835944  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:38.835962  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.406460  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:41.417553  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:41.417620  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:41.444684  109844 cri.go:89] found id: ""
	I1002 20:56:41.444698  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.444705  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:41.444710  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:41.444781  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:41.471352  109844 cri.go:89] found id: ""
	I1002 20:56:41.471370  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.471382  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:41.471390  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:41.471442  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:41.498686  109844 cri.go:89] found id: ""
	I1002 20:56:41.498702  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.498709  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:41.498714  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:41.498785  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:41.524449  109844 cri.go:89] found id: ""
	I1002 20:56:41.524463  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.524469  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:41.524478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:41.524531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:41.551827  109844 cri.go:89] found id: ""
	I1002 20:56:41.551845  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.551857  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:41.551864  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:41.551913  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:41.577898  109844 cri.go:89] found id: ""
	I1002 20:56:41.577918  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.577927  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:41.577933  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:41.577989  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:41.604237  109844 cri.go:89] found id: ""
	I1002 20:56:41.604254  109844 logs.go:282] 0 containers: []
	W1002 20:56:41.604261  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:41.604270  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:41.604290  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:41.675907  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:41.675931  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:41.690491  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:41.690509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:41.749157  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:41.742425    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.742947    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.744615    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.745122    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:41.746195    8916 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:41.749169  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:41.749184  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:41.815715  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:41.815751  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.347532  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:44.358694  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:44.358755  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:44.385917  109844 cri.go:89] found id: ""
	I1002 20:56:44.385932  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.385941  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:44.385946  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:44.385992  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:44.412267  109844 cri.go:89] found id: ""
	I1002 20:56:44.412283  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.412289  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:44.412293  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:44.412344  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:44.439227  109844 cri.go:89] found id: ""
	I1002 20:56:44.439242  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.439249  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:44.439253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:44.439298  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:44.465395  109844 cri.go:89] found id: ""
	I1002 20:56:44.465411  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.465418  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:44.465423  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:44.465473  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:44.491435  109844 cri.go:89] found id: ""
	I1002 20:56:44.491452  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.491457  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:44.491462  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:44.491508  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:44.517875  109844 cri.go:89] found id: ""
	I1002 20:56:44.517892  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.517899  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:44.517904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:44.517956  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:44.544412  109844 cri.go:89] found id: ""
	I1002 20:56:44.544428  109844 logs.go:282] 0 containers: []
	W1002 20:56:44.544435  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:44.544443  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:44.544454  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:44.558619  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:44.558637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:44.615090  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:44.608024    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.608566    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610178    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.610634    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:44.612155    9036 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:44.615103  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:44.615115  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:44.675486  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:44.675509  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:44.704835  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:44.704853  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:47.280286  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:47.291478  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:47.291529  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:47.318560  109844 cri.go:89] found id: ""
	I1002 20:56:47.318581  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.318586  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:47.318594  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:47.318648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:47.344455  109844 cri.go:89] found id: ""
	I1002 20:56:47.344471  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.344477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:47.344482  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:47.344527  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:47.370437  109844 cri.go:89] found id: ""
	I1002 20:56:47.370452  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.370458  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:47.370464  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:47.370532  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:47.396657  109844 cri.go:89] found id: ""
	I1002 20:56:47.396672  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.396678  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:47.396682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:47.396751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:47.422143  109844 cri.go:89] found id: ""
	I1002 20:56:47.422166  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.422172  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:47.422178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:47.422230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:47.447815  109844 cri.go:89] found id: ""
	I1002 20:56:47.447835  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.447844  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:47.447851  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:47.447910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:47.473476  109844 cri.go:89] found id: ""
	I1002 20:56:47.473491  109844 logs.go:282] 0 containers: []
	W1002 20:56:47.473498  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:47.473514  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:47.473528  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:47.487700  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:47.487722  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:47.544344  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:47.537160    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.537816    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539394    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.539878    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:47.541420    9158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:47.544360  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:47.544370  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:47.605987  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:47.606010  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:47.634796  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:47.634815  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
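
Note the cadence: each failed probe triggers a full log sweep, then a retry roughly three seconds later (20:56:41, 20:56:44, 20:56:47, ...). A sketch of such a fixed-interval wait loop, assuming a hypothetical apiserverRunning probe built on the same pgrep seen in the log; minikube's actual retry and timeout plumbing differs in detail:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning is a stand-in probe: pgrep exits non-zero when no
// kube-apiserver process matches, just like the repeated pgrep above.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// waitForAPIServer polls every interval until the probe succeeds or the
// deadline passes, returning an error on timeout.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		// On a miss, the real code gathers kubelet/dmesg/CRI-O logs here.
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for kube-apiserver")
}

func main() {
	if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
		fmt.Println(err)
	}
}
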
	I1002 20:56:50.205345  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:50.216795  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:50.216856  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:50.242490  109844 cri.go:89] found id: ""
	I1002 20:56:50.242507  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.242516  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:50.242523  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:50.242599  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:50.269384  109844 cri.go:89] found id: ""
	I1002 20:56:50.269399  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.269405  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:50.269410  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:50.269455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:50.294810  109844 cri.go:89] found id: ""
	I1002 20:56:50.294830  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.294839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:50.294847  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:50.294900  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:50.321301  109844 cri.go:89] found id: ""
	I1002 20:56:50.321330  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.321339  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:50.321345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:50.321396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:50.348435  109844 cri.go:89] found id: ""
	I1002 20:56:50.348454  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.348463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:50.348470  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:50.348521  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:50.375520  109844 cri.go:89] found id: ""
	I1002 20:56:50.375537  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.375544  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:50.375550  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:50.375612  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:50.401919  109844 cri.go:89] found id: ""
	I1002 20:56:50.401935  109844 logs.go:282] 0 containers: []
	W1002 20:56:50.401941  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:50.401949  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:50.401960  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:50.474853  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:50.474878  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:50.489483  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:50.489502  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:50.546358  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:50.539620    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.540253    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.541729    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.542224    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:50.543673    9278 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:50.546371  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:50.546387  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:50.612342  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:50.612365  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.143229  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:53.154347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:53.154399  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:53.179697  109844 cri.go:89] found id: ""
	I1002 20:56:53.179714  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.179722  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:53.179727  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:53.179796  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:53.206078  109844 cri.go:89] found id: ""
	I1002 20:56:53.206094  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.206102  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:53.206107  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:53.206161  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:53.232905  109844 cri.go:89] found id: ""
	I1002 20:56:53.232920  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.232929  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:53.232935  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:53.232990  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:53.258881  109844 cri.go:89] found id: ""
	I1002 20:56:53.258897  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.258903  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:53.258908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:53.259002  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:53.286005  109844 cri.go:89] found id: ""
	I1002 20:56:53.286020  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.286026  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:53.286031  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:53.286077  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:53.311544  109844 cri.go:89] found id: ""
	I1002 20:56:53.311562  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.311572  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:53.311579  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:53.311642  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:53.338344  109844 cri.go:89] found id: ""
	I1002 20:56:53.338360  109844 logs.go:282] 0 containers: []
	W1002 20:56:53.338366  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:53.338375  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:53.338391  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:53.394654  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:53.387661    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.388633    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.389809    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.390172    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:53.391803    9400 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:53.394666  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:53.394676  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:53.457101  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:53.457125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:53.487445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:53.487464  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:53.560767  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:53.560788  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
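
Every `describe nodes` attempt dies the same way: kubectl gets connection refused on localhost:8441, meaning nothing is listening on the apiserver port at all. A refused connection points at a container that never started (consistent with the empty crictl listings above), whereas a timeout would suggest a wedged but running process. A small illustrative probe making that distinction (same host and port assumed; not part of the test suite):

package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	switch {
	case err == nil:
		conn.Close()
		fmt.Println("apiserver port is open")
	case errors.Is(err, syscall.ECONNREFUSED):
		// Matches the log: nothing is bound to 8441, so the apiserver
		// never came up on this node.
		fmt.Println("connection refused: no listener on 8441")
	default:
		fmt.Println("other failure (timeout, DNS, ...):", err)
	}
}
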
	I1002 20:56:56.077698  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:56.088607  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:56.088653  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:56.115831  109844 cri.go:89] found id: ""
	I1002 20:56:56.115851  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.115860  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:56.115873  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:56.115930  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:56.143933  109844 cri.go:89] found id: ""
	I1002 20:56:56.143951  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.143960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:56.143966  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:56.144013  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:56.170959  109844 cri.go:89] found id: ""
	I1002 20:56:56.170976  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.170983  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:56.170987  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:56.171041  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:56.198476  109844 cri.go:89] found id: ""
	I1002 20:56:56.198493  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.198502  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:56.198507  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:56.198553  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:56.225118  109844 cri.go:89] found id: ""
	I1002 20:56:56.225136  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.225144  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:56.225151  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:56.225203  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:56.250695  109844 cri.go:89] found id: ""
	I1002 20:56:56.250712  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.250719  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:56.250724  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:56.250798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:56.277912  109844 cri.go:89] found id: ""
	I1002 20:56:56.277927  109844 logs.go:282] 0 containers: []
	W1002 20:56:56.277933  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:56.277939  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:56.277949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:56.348703  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:56.348726  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:56.363669  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:56.363691  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:56.421487  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:56.414561    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.415193    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.416833    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.417344    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:56.418421    9538 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:56:56.421501  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:56.421512  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:56.486234  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:56.486258  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.016061  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:56:59.027120  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:56:59.027174  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:56:59.055077  109844 cri.go:89] found id: ""
	I1002 20:56:59.055094  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.055100  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:56:59.055105  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:56:59.055154  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:56:59.080243  109844 cri.go:89] found id: ""
	I1002 20:56:59.080260  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.080267  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:56:59.080272  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:56:59.080321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:56:59.105555  109844 cri.go:89] found id: ""
	I1002 20:56:59.105573  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.105582  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:56:59.105588  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:56:59.105643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:56:59.131895  109844 cri.go:89] found id: ""
	I1002 20:56:59.131911  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.131918  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:56:59.131923  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:56:59.131971  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:56:59.158699  109844 cri.go:89] found id: ""
	I1002 20:56:59.158716  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.158724  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:56:59.158731  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:56:59.158813  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:56:59.184528  109844 cri.go:89] found id: ""
	I1002 20:56:59.184547  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.184553  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:56:59.184558  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:56:59.184621  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:56:59.210382  109844 cri.go:89] found id: ""
	I1002 20:56:59.210398  109844 logs.go:282] 0 containers: []
	W1002 20:56:59.210406  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:56:59.210415  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:56:59.210435  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:56:59.274026  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:56:59.274049  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:56:59.303182  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:56:59.303199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:56:59.372421  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:56:59.372446  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:56:59.388344  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:56:59.388367  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:56:59.449053  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:56:59.441943    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.442636    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.443715    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.444268    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.445829    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:56:59.441943    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.442636    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.443715    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.444268    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:56:59.445829    9678 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
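Every "describe nodes" attempt fails the same way: kubectl's discovery client retries five times (the memcache.go lines) and each dial to the configured apiserver port 8441 is refused, which is consistent with the empty crictl probes above, since no apiserver container exists to listen there. The same failure mode can be reproduced without kubectl by a plain TCP dial; a hypothetical check, with the host and port taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// With no apiserver process, this prints the same
		// "connect: connection refused" seen in the stderr above.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}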
	I1002 20:57:01.950787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:01.962421  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:01.962505  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:01.990756  109844 cri.go:89] found id: ""
	I1002 20:57:01.990774  109844 logs.go:282] 0 containers: []
	W1002 20:57:01.990781  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:01.990786  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:01.990835  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:02.018452  109844 cri.go:89] found id: ""
	I1002 20:57:02.018471  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.018480  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:02.018485  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:02.018568  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:02.046456  109844 cri.go:89] found id: ""
	I1002 20:57:02.046474  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.046481  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:02.046485  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:02.046549  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:02.074761  109844 cri.go:89] found id: ""
	I1002 20:57:02.074781  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.074794  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:02.074799  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:02.074859  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:02.102891  109844 cri.go:89] found id: ""
	I1002 20:57:02.102910  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.102919  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:02.102926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:02.102986  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:02.129478  109844 cri.go:89] found id: ""
	I1002 20:57:02.129496  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.129503  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:02.129509  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:02.129571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:02.157911  109844 cri.go:89] found id: ""
	I1002 20:57:02.157927  109844 logs.go:282] 0 containers: []
	W1002 20:57:02.157934  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:02.157941  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:02.157954  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:02.216970  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:02.209199    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.209824    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211437    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211932    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.213815    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:02.209199    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.209824    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211437    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.211932    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:02.213815    9772 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:02.216979  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:02.216990  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:02.280811  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:02.280839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:02.310062  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:02.310084  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:02.379511  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:02.379536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
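The timestamps (20:56:59, 20:57:01, 20:57:04, 20:57:07, ...) show the outer loop re-running the pgrep check roughly every three seconds and gathering the same diagnostics on each miss, until the start timeout expires. A sketch of that cadence, with the interval and deadline assumed from the log spacing rather than taken from minikube's source:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// check above: pgrep exits non-zero when nothing matches, which Run reports
// as an error.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		fmt.Println("kube-apiserver not found; gathering logs and retrying")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}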
	I1002 20:57:04.894910  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:04.906215  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:04.906297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:04.934307  109844 cri.go:89] found id: ""
	I1002 20:57:04.934323  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.934330  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:04.934335  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:04.934388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:04.961709  109844 cri.go:89] found id: ""
	I1002 20:57:04.961725  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.961731  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:04.961751  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:04.961803  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:04.988103  109844 cri.go:89] found id: ""
	I1002 20:57:04.988123  109844 logs.go:282] 0 containers: []
	W1002 20:57:04.988134  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:04.988141  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:04.988204  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:05.015267  109844 cri.go:89] found id: ""
	I1002 20:57:05.015282  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.015293  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:05.015298  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:05.015347  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:05.042563  109844 cri.go:89] found id: ""
	I1002 20:57:05.042585  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.042592  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:05.042597  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:05.042648  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:05.070337  109844 cri.go:89] found id: ""
	I1002 20:57:05.070356  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.070365  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:05.070372  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:05.070426  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:05.096592  109844 cri.go:89] found id: ""
	I1002 20:57:05.096607  109844 logs.go:282] 0 containers: []
	W1002 20:57:05.096613  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:05.096622  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:05.096635  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:05.169506  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:05.169529  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:05.184432  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:05.184452  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:05.241625  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:05.234636    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.235167    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.236774    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.237205    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.238801    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:05.234636    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.235167    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.236774    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.237205    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:05.238801    9907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:05.241643  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:05.241657  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:05.304652  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:05.304675  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
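The "container status" command above carries its own fallback: the backtick substitution `which crictl || echo crictl` expands to the crictl path when it is installed (and to the bare word crictl otherwise, so the first command still fails cleanly), and the trailing `|| sudo docker ps -a` then tries Docker instead. Roughly the same logic in Go, with exec.LookPath standing in for which; the structure is assumed, only the two commands come from the log:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to docker.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime CLI available:", err)
		return
	}
	fmt.Print(string(out))
}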
	I1002 20:57:07.835766  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:07.847178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:07.847237  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:07.873351  109844 cri.go:89] found id: ""
	I1002 20:57:07.873370  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.873380  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:07.873387  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:07.873457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:07.900684  109844 cri.go:89] found id: ""
	I1002 20:57:07.900700  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.900707  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:07.900713  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:07.900792  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:07.928661  109844 cri.go:89] found id: ""
	I1002 20:57:07.928677  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.928686  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:07.928692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:07.928763  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:07.954556  109844 cri.go:89] found id: ""
	I1002 20:57:07.954573  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.954583  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:07.954589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:07.954657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:07.982818  109844 cri.go:89] found id: ""
	I1002 20:57:07.982833  109844 logs.go:282] 0 containers: []
	W1002 20:57:07.982839  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:07.982845  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:07.982903  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:08.010107  109844 cri.go:89] found id: ""
	I1002 20:57:08.010123  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.010129  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:08.010134  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:08.010183  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:08.037125  109844 cri.go:89] found id: ""
	I1002 20:57:08.037142  109844 logs.go:282] 0 containers: []
	W1002 20:57:08.037150  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:08.037157  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:08.037166  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:08.096417  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:08.096440  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:08.126218  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:08.126239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:08.194545  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:08.194571  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:08.210281  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:08.210304  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:08.266772  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:08.260009   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.260455   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262035   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262436   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.264034   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:08.260009   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.260455   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262035   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.262436   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:08.264034   10045 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
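One detail worth noting in the stderr: the refused dial is reported against [::1]:8441, the IPv6 loopback, meaning "localhost" resolved to ::1 first on this host; the refusal itself is the same either way, since nothing listens on 8441 on any address. A quick, environment-dependent way to see the candidate addresses:

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("localhost")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// Commonly prints [::1 127.0.0.1] on hosts configured like this runner.
	fmt.Println("localhost resolves to:", addrs)
}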
	I1002 20:57:10.768500  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:10.779701  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:10.779778  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:10.806553  109844 cri.go:89] found id: ""
	I1002 20:57:10.806570  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.806578  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:10.806583  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:10.806628  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:10.831907  109844 cri.go:89] found id: ""
	I1002 20:57:10.831921  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.831938  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:10.831942  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:10.831987  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:10.858755  109844 cri.go:89] found id: ""
	I1002 20:57:10.858773  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.858781  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:10.858786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:10.858844  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:10.886428  109844 cri.go:89] found id: ""
	I1002 20:57:10.886451  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.886460  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:10.886467  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:10.886528  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:10.912297  109844 cri.go:89] found id: ""
	I1002 20:57:10.912336  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.912344  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:10.912351  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:10.912405  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:10.939017  109844 cri.go:89] found id: ""
	I1002 20:57:10.939037  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.939043  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:10.939050  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:10.939112  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:10.964149  109844 cri.go:89] found id: ""
	I1002 20:57:10.964166  109844 logs.go:282] 0 containers: []
	W1002 20:57:10.964173  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:10.964181  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:10.964192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:11.035194  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:11.035220  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:11.050083  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:11.050103  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:11.107489  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:11.100162   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.100777   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102350   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102866   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.104475   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:11.100162   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.100777   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102350   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.102866   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:11.104475   10152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:11.107508  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:11.107525  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:11.168024  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:11.168048  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:13.699241  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:13.709921  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:13.709982  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:13.735975  109844 cri.go:89] found id: ""
	I1002 20:57:13.735994  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.736004  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:13.736010  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:13.736059  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:13.762999  109844 cri.go:89] found id: ""
	I1002 20:57:13.763017  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.763024  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:13.763029  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:13.763082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:13.790647  109844 cri.go:89] found id: ""
	I1002 20:57:13.790667  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.790676  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:13.790682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:13.790753  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:13.816587  109844 cri.go:89] found id: ""
	I1002 20:57:13.816607  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.816617  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:13.816623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:13.816688  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:13.842814  109844 cri.go:89] found id: ""
	I1002 20:57:13.842829  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.842836  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:13.842841  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:13.842891  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:13.868241  109844 cri.go:89] found id: ""
	I1002 20:57:13.868260  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.868269  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:13.868275  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:13.868327  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:13.895111  109844 cri.go:89] found id: ""
	I1002 20:57:13.895128  109844 logs.go:282] 0 containers: []
	W1002 20:57:13.895138  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:13.895147  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:13.895158  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:13.962125  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:13.962150  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:13.976779  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:13.976795  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:14.033771  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:14.027040   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.027554   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029207   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029659   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.031092   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:14.027040   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.027554   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029207   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.029659   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:14.031092   10276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:14.033782  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:14.033792  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:14.097410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:14.097434  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:16.629753  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:16.640873  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:16.640931  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:16.668538  109844 cri.go:89] found id: ""
	I1002 20:57:16.668557  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.668568  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:16.668574  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:16.668633  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:16.697564  109844 cri.go:89] found id: ""
	I1002 20:57:16.697595  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.697605  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:16.697612  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:16.697666  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:16.725228  109844 cri.go:89] found id: ""
	I1002 20:57:16.725242  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.725248  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:16.725253  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:16.725297  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:16.750995  109844 cri.go:89] found id: ""
	I1002 20:57:16.751010  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.751017  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:16.751022  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:16.751066  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:16.777779  109844 cri.go:89] found id: ""
	I1002 20:57:16.777796  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.777803  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:16.777809  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:16.777869  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:16.803504  109844 cri.go:89] found id: ""
	I1002 20:57:16.803521  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.803527  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:16.803532  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:16.803593  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:16.830272  109844 cri.go:89] found id: ""
	I1002 20:57:16.830287  109844 logs.go:282] 0 containers: []
	W1002 20:57:16.830294  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:16.830302  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:16.830313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:16.902383  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:16.902407  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:16.917396  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:16.917415  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:16.974693  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:16.966376   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.966932   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.968658   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.969953   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.970548   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:16.966376   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.966932   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.968658   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.969953   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:16.970548   10407 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:16.974702  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:16.974713  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:17.035157  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:17.035179  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.566417  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:19.577676  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:19.577746  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:19.604005  109844 cri.go:89] found id: ""
	I1002 20:57:19.604021  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.604027  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:19.604032  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:19.604080  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:19.631397  109844 cri.go:89] found id: ""
	I1002 20:57:19.631415  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.631423  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:19.631433  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:19.631486  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:19.657474  109844 cri.go:89] found id: ""
	I1002 20:57:19.657491  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.657498  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:19.657502  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:19.657550  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:19.683165  109844 cri.go:89] found id: ""
	I1002 20:57:19.683183  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.683240  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:19.683248  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:19.683303  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:19.709607  109844 cri.go:89] found id: ""
	I1002 20:57:19.709623  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.709629  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:19.709634  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:19.709681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:19.736310  109844 cri.go:89] found id: ""
	I1002 20:57:19.736326  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.736333  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:19.736338  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:19.736388  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:19.763087  109844 cri.go:89] found id: ""
	I1002 20:57:19.763103  109844 logs.go:282] 0 containers: []
	W1002 20:57:19.763109  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:19.763117  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:19.763130  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:19.777545  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:19.777563  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:19.835265  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:19.828219   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.828825   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830398   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830870   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.832345   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:19.828219   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.828825   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830398   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.830870   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:19.832345   10531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:19.835276  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:19.835288  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:19.900559  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:19.900584  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:19.929602  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:19.929620  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.502944  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:22.514059  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:22.514108  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:22.540127  109844 cri.go:89] found id: ""
	I1002 20:57:22.540144  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.540152  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:22.540158  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:22.540229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:22.566906  109844 cri.go:89] found id: ""
	I1002 20:57:22.566920  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.566929  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:22.566936  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:22.566988  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:22.593141  109844 cri.go:89] found id: ""
	I1002 20:57:22.593160  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.593170  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:22.593178  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:22.593258  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:22.617379  109844 cri.go:89] found id: ""
	I1002 20:57:22.617395  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.617403  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:22.617408  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:22.617482  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:22.642997  109844 cri.go:89] found id: ""
	I1002 20:57:22.643015  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.643023  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:22.643030  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:22.643088  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:22.669891  109844 cri.go:89] found id: ""
	I1002 20:57:22.669910  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.669918  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:22.669925  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:22.669979  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:22.698537  109844 cri.go:89] found id: ""
	I1002 20:57:22.698553  109844 logs.go:282] 0 containers: []
	W1002 20:57:22.698559  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:22.698571  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:22.698582  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:22.764795  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:22.764818  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:22.779339  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:22.779360  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:22.835541  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:22.828422   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.828970   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.830522   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.831086   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.832606   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:22.828422   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.828970   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.830522   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.831086   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:22.832606   10656 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:22.835550  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:22.835561  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:22.893791  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:22.893816  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.423487  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:25.434946  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:25.435008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:25.461262  109844 cri.go:89] found id: ""
	I1002 20:57:25.461278  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.461286  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:25.461293  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:25.461373  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:25.487938  109844 cri.go:89] found id: ""
	I1002 20:57:25.487954  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.487960  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:25.487965  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:25.488008  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:25.513819  109844 cri.go:89] found id: ""
	I1002 20:57:25.513833  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.513839  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:25.513844  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:25.513887  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:25.540047  109844 cri.go:89] found id: ""
	I1002 20:57:25.540064  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.540073  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:25.540080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:25.540218  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:25.565240  109844 cri.go:89] found id: ""
	I1002 20:57:25.565256  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.565262  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:25.565267  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:25.565332  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:25.591199  109844 cri.go:89] found id: ""
	I1002 20:57:25.591214  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.591221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:25.591226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:25.591271  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:25.617021  109844 cri.go:89] found id: ""
	I1002 20:57:25.617040  109844 logs.go:282] 0 containers: []
	W1002 20:57:25.617047  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:25.617055  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:25.617071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:25.674861  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:25.668100   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.668693   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670241   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670676   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.672203   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:25.668100   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.668693   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670241   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.670676   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:25.672203   10767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:25.674872  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:25.674887  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:25.735460  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:25.735487  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:25.765055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:25.765071  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:25.833285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:25.833307  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.348626  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:28.359370  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:28.359432  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:28.384665  109844 cri.go:89] found id: ""
	I1002 20:57:28.384681  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.384688  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:28.384692  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:28.384756  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:28.411127  109844 cri.go:89] found id: ""
	I1002 20:57:28.411142  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.411148  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:28.411153  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:28.411198  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:28.439419  109844 cri.go:89] found id: ""
	I1002 20:57:28.439433  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.439439  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:28.439444  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:28.439491  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:28.465419  109844 cri.go:89] found id: ""
	I1002 20:57:28.465434  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.465441  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:28.465446  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:28.465494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:28.492080  109844 cri.go:89] found id: ""
	I1002 20:57:28.492098  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.492107  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:28.492114  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:28.492171  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:28.518199  109844 cri.go:89] found id: ""
	I1002 20:57:28.518215  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.518221  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:28.518226  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:28.518290  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:28.545226  109844 cri.go:89] found id: ""
	I1002 20:57:28.545241  109844 logs.go:282] 0 containers: []
	W1002 20:57:28.545248  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:28.545255  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:28.545266  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:28.574035  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:28.574055  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:28.640805  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:28.640827  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:28.655177  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:28.655195  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:28.715784  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:28.707733   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.708329   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.710706   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.711235   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.712816   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:28.707733   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.708329   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.710706   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.711235   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:28.712816   10909 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:28.715802  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:28.715813  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.282555  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:31.293415  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:31.293460  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:31.320069  109844 cri.go:89] found id: ""
	I1002 20:57:31.320084  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.320090  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:31.320096  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:31.320141  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:31.347288  109844 cri.go:89] found id: ""
	I1002 20:57:31.347308  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.347315  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:31.347319  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:31.347370  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:31.373910  109844 cri.go:89] found id: ""
	I1002 20:57:31.373926  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.373932  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:31.373936  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:31.373980  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:31.399488  109844 cri.go:89] found id: ""
	I1002 20:57:31.399504  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.399510  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:31.399515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:31.399579  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:31.425794  109844 cri.go:89] found id: ""
	I1002 20:57:31.425809  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.425815  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:31.425824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:31.425878  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:31.452232  109844 cri.go:89] found id: ""
	I1002 20:57:31.452247  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.452253  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:31.452258  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:31.452304  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:31.478189  109844 cri.go:89] found id: ""
	I1002 20:57:31.478208  109844 logs.go:282] 0 containers: []
	W1002 20:57:31.478217  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:31.478226  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:31.478239  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:31.535213  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:31.527960   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.528553   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530059   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530507   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.532158   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:31.527960   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.528553   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530059   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.530507   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:31.532158   11016 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:31.535223  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:31.535235  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:31.596390  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:31.596416  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:31.625326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:31.625347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:31.695449  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:31.695470  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.210847  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:34.221612  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:34.221660  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:34.248100  109844 cri.go:89] found id: ""
	I1002 20:57:34.248118  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.248124  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:34.248129  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:34.248177  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:34.273928  109844 cri.go:89] found id: ""
	I1002 20:57:34.273947  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.273953  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:34.273958  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:34.274004  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:34.300659  109844 cri.go:89] found id: ""
	I1002 20:57:34.300677  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.300684  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:34.300688  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:34.300751  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:34.328932  109844 cri.go:89] found id: ""
	I1002 20:57:34.328950  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.328958  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:34.328964  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:34.329012  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:34.355289  109844 cri.go:89] found id: ""
	I1002 20:57:34.355305  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.355315  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:34.355320  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:34.355371  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:34.381635  109844 cri.go:89] found id: ""
	I1002 20:57:34.381651  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.381658  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:34.381664  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:34.381713  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:34.406539  109844 cri.go:89] found id: ""
	I1002 20:57:34.406558  109844 logs.go:282] 0 containers: []
	W1002 20:57:34.406567  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:34.406575  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:34.406586  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:34.476613  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:34.476637  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:34.491529  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:34.491545  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:34.548604  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:34.541411   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.541857   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543425   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543873   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.545469   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:34.541411   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.541857   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543425   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.543873   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:34.545469   11146 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:34.548616  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:34.548627  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:34.614034  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:34.614057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.146000  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:37.156680  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:37.156731  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:37.183104  109844 cri.go:89] found id: ""
	I1002 20:57:37.183120  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.183126  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:37.183130  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:37.183180  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:37.209542  109844 cri.go:89] found id: ""
	I1002 20:57:37.209561  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.209570  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:37.209593  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:37.209651  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:37.236887  109844 cri.go:89] found id: ""
	I1002 20:57:37.236902  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.236907  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:37.236912  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:37.236955  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:37.263572  109844 cri.go:89] found id: ""
	I1002 20:57:37.263590  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.263600  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:37.263606  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:37.263670  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:37.290064  109844 cri.go:89] found id: ""
	I1002 20:57:37.290081  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.290088  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:37.290092  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:37.290140  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:37.315854  109844 cri.go:89] found id: ""
	I1002 20:57:37.315870  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.315877  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:37.315881  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:37.315928  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:37.341863  109844 cri.go:89] found id: ""
	I1002 20:57:37.341881  109844 logs.go:282] 0 containers: []
	W1002 20:57:37.341888  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:37.341896  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:37.341906  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:37.370994  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:37.371009  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:37.436106  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:37.436137  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:37.451121  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:37.451149  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:37.506868  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:37.499823   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.500382   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.501949   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.502458   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.504014   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:37.499823   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.500382   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.501949   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.502458   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:37.504014   11291 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:37.506882  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:37.506894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.067997  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:40.078961  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:40.079015  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:40.104825  109844 cri.go:89] found id: ""
	I1002 20:57:40.104841  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.104848  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:40.104853  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:40.104901  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:40.131395  109844 cri.go:89] found id: ""
	I1002 20:57:40.131410  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.131417  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:40.131421  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:40.131472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:40.156879  109844 cri.go:89] found id: ""
	I1002 20:57:40.156894  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.156900  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:40.156904  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:40.156950  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:40.184037  109844 cri.go:89] found id: ""
	I1002 20:57:40.184052  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.184058  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:40.184063  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:40.184109  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:40.209631  109844 cri.go:89] found id: ""
	I1002 20:57:40.209645  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.209652  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:40.209657  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:40.209718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:40.235959  109844 cri.go:89] found id: ""
	I1002 20:57:40.235974  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.235981  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:40.235985  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:40.236031  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:40.263268  109844 cri.go:89] found id: ""
	I1002 20:57:40.263295  109844 logs.go:282] 0 containers: []
	W1002 20:57:40.263303  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:40.263312  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:40.263329  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:40.277655  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:40.277674  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:40.333759  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:40.326797   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.327375   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.328853   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.329279   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.330917   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:40.326797   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.327375   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.328853   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.329279   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:40.330917   11401 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:40.333771  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:40.333782  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:40.398547  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:40.398573  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:40.429055  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:40.429075  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.000960  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:43.011533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:43.011594  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:43.038639  109844 cri.go:89] found id: ""
	I1002 20:57:43.038658  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.038664  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:43.038670  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:43.038718  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:43.064610  109844 cri.go:89] found id: ""
	I1002 20:57:43.064629  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.064638  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:43.064645  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:43.064692  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:43.092797  109844 cri.go:89] found id: ""
	I1002 20:57:43.092814  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.092829  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:43.092836  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:43.092905  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:43.117372  109844 cri.go:89] found id: ""
	I1002 20:57:43.117390  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.117398  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:43.117405  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:43.117455  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:43.143883  109844 cri.go:89] found id: ""
	I1002 20:57:43.143898  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.143903  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:43.143908  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:43.143954  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:43.168684  109844 cri.go:89] found id: ""
	I1002 20:57:43.168703  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.168711  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:43.168719  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:43.168794  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:43.194200  109844 cri.go:89] found id: ""
	I1002 20:57:43.194219  109844 logs.go:282] 0 containers: []
	W1002 20:57:43.194226  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:43.194233  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:43.194243  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:43.224696  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:43.224716  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:43.292485  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:43.292511  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:43.307408  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:43.307426  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:43.365123  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:43.357900   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.358436   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360055   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.360531   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:43.362200   11553 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:43.365138  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:43.365151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:45.930176  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:45.940786  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:45.940834  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:45.966149  109844 cri.go:89] found id: ""
	I1002 20:57:45.966163  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.966170  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:45.966174  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:45.966229  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:45.991076  109844 cri.go:89] found id: ""
	I1002 20:57:45.991091  109844 logs.go:282] 0 containers: []
	W1002 20:57:45.991098  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:45.991103  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:45.991160  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:46.016684  109844 cri.go:89] found id: ""
	I1002 20:57:46.016699  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.016707  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:46.016712  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:46.016783  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:46.044048  109844 cri.go:89] found id: ""
	I1002 20:57:46.044066  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.044075  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:46.044080  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:46.044126  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:46.072438  109844 cri.go:89] found id: ""
	I1002 20:57:46.072458  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.072463  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:46.072468  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:46.072513  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:46.098408  109844 cri.go:89] found id: ""
	I1002 20:57:46.098427  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.098435  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:46.098440  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:46.098494  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:46.125237  109844 cri.go:89] found id: ""
	I1002 20:57:46.125253  109844 logs.go:282] 0 containers: []
	W1002 20:57:46.125260  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:46.125267  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:46.125279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:46.181454  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:46.174705   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.175269   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.176884   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.177274   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:46.178794   11648 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:46.181465  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:46.181477  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:46.245377  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:46.245400  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:46.273829  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:46.273850  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:46.343515  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:46.343537  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:48.859249  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:48.870377  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:48.870433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:48.897669  109844 cri.go:89] found id: ""
	I1002 20:57:48.897687  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.897694  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:48.897699  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:48.897762  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:48.925008  109844 cri.go:89] found id: ""
	I1002 20:57:48.925023  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.925030  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:48.925036  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:48.925083  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:48.951643  109844 cri.go:89] found id: ""
	I1002 20:57:48.951657  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.951664  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:48.951668  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:48.951714  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:48.979002  109844 cri.go:89] found id: ""
	I1002 20:57:48.979020  109844 logs.go:282] 0 containers: []
	W1002 20:57:48.979029  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:48.979036  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:48.979093  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:49.004625  109844 cri.go:89] found id: ""
	I1002 20:57:49.004641  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.004648  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:49.004652  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:49.004701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:49.031772  109844 cri.go:89] found id: ""
	I1002 20:57:49.031788  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.031793  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:49.031805  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:49.031862  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:49.057980  109844 cri.go:89] found id: ""
	I1002 20:57:49.057996  109844 logs.go:282] 0 containers: []
	W1002 20:57:49.058004  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:49.058013  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:49.058023  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:49.124248  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:49.124270  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:49.138512  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:49.138533  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:49.195138  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:49.187056   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.188681   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.189138   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.190686   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:49.191107   11774 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:57:49.195151  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:49.195173  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:49.258973  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:49.258997  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
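	For anyone re-running these checks by hand, each cycle above reduces to the shell loop below. This is a minimal sketch, assuming shell access to the node (e.g. via "minikube ssh"); it uses only commands that appear verbatim in the log, and the loop variable names are mine:

	    #!/usr/bin/env bash
	    # Probe for the control-plane containers exactly as the log above does.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="${name}")
	      if [ -z "${ids}" ]; then
	        echo "No container was found matching \"${name}\""
	      else
	        echo "${name}: ${ids}"
	      fi
	    done
	    # The same log bundle minikube gathers when nothing is found:
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a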
	I1002 20:57:51.791466  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:51.802977  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:51.803035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:51.828498  109844 cri.go:89] found id: ""
	I1002 20:57:51.828514  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.828521  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:51.828526  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:51.828588  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:51.854342  109844 cri.go:89] found id: ""
	I1002 20:57:51.854360  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.854371  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:51.854378  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:51.854456  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:51.880507  109844 cri.go:89] found id: ""
	I1002 20:57:51.880524  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.880532  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:51.880537  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:51.880595  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:51.905868  109844 cri.go:89] found id: ""
	I1002 20:57:51.905885  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.905899  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:51.905906  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:51.905958  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:51.931501  109844 cri.go:89] found id: ""
	I1002 20:57:51.931520  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.931527  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:51.931533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:51.931584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:51.959507  109844 cri.go:89] found id: ""
	I1002 20:57:51.959531  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.959537  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:51.959543  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:51.959597  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:51.986060  109844 cri.go:89] found id: ""
	I1002 20:57:51.986075  109844 logs.go:282] 0 containers: []
	W1002 20:57:51.986082  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:51.986090  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:51.986102  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:52.001242  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:52.001265  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:52.058943  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:52.051510   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.052186   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.053757   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.054153   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:52.055841   11903 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:52.058955  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:52.058966  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:52.124165  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:52.124189  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:52.153884  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:52.153905  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.722906  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:54.734175  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:54.734232  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:54.759813  109844 cri.go:89] found id: ""
	I1002 20:57:54.759827  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.759834  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:54.759839  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:54.759886  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:54.786211  109844 cri.go:89] found id: ""
	I1002 20:57:54.786228  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.786234  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:54.786238  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:54.786296  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:54.812209  109844 cri.go:89] found id: ""
	I1002 20:57:54.812224  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.812231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:54.812235  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:54.812279  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:54.838338  109844 cri.go:89] found id: ""
	I1002 20:57:54.838354  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.838359  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:54.838364  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:54.838409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:54.864235  109844 cri.go:89] found id: ""
	I1002 20:57:54.864250  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.864257  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:54.864262  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:54.864313  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:54.889322  109844 cri.go:89] found id: ""
	I1002 20:57:54.889338  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.889345  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:54.889350  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:54.889408  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:54.914375  109844 cri.go:89] found id: ""
	I1002 20:57:54.914389  109844 logs.go:282] 0 containers: []
	W1002 20:57:54.914396  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:54.914403  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:54.914413  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:54.982673  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:54.982695  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:54.997624  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:54.997643  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:55.054906  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:55.047912   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.048515   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050118   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.050555   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:55.052232   12029 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:55.054918  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:55.054930  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:55.114767  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:55.114791  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:57:57.644999  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:57:57.656449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:57:57.656504  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:57:57.681519  109844 cri.go:89] found id: ""
	I1002 20:57:57.681536  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.681547  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:57:57.681562  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:57:57.681613  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:57:57.707282  109844 cri.go:89] found id: ""
	I1002 20:57:57.707299  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.707306  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:57:57.707311  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:57:57.707368  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:57:57.733730  109844 cri.go:89] found id: ""
	I1002 20:57:57.733764  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.733773  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:57:57.733779  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:57:57.733829  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:57:57.759892  109844 cri.go:89] found id: ""
	I1002 20:57:57.759910  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.759919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:57:57.759930  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:57:57.759977  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:57:57.786461  109844 cri.go:89] found id: ""
	I1002 20:57:57.786480  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.786488  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:57:57.786494  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:57:57.786554  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:57:57.811498  109844 cri.go:89] found id: ""
	I1002 20:57:57.811513  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.811520  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:57:57.811525  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:57:57.811584  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:57:57.838643  109844 cri.go:89] found id: ""
	I1002 20:57:57.838658  109844 logs.go:282] 0 containers: []
	W1002 20:57:57.838664  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:57:57.838672  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:57:57.838683  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:57:57.903092  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:57:57.903112  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:57:57.917294  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:57:57.917313  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:57:57.973186  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:57:57.965977   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.966517   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968135   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.968620   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:57:57.970155   12154 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:57:57.973196  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:57:57.973206  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:57:58.037591  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:57:58.037615  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:00.568697  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:00.579453  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:00.579509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:00.605205  109844 cri.go:89] found id: ""
	I1002 20:58:00.605221  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.605228  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:00.605236  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:00.605281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:00.630667  109844 cri.go:89] found id: ""
	I1002 20:58:00.630683  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.630690  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:00.630695  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:00.630779  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:00.656328  109844 cri.go:89] found id: ""
	I1002 20:58:00.656343  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.656349  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:00.656356  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:00.656404  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:00.687352  109844 cri.go:89] found id: ""
	I1002 20:58:00.687372  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.687380  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:00.687387  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:00.687450  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:00.715971  109844 cri.go:89] found id: ""
	I1002 20:58:00.715989  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.715996  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:00.716001  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:00.716051  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:00.743250  109844 cri.go:89] found id: ""
	I1002 20:58:00.743267  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.743274  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:00.743279  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:00.743337  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:00.768377  109844 cri.go:89] found id: ""
	I1002 20:58:00.768394  109844 logs.go:282] 0 containers: []
	W1002 20:58:00.768402  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:00.768410  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:00.768421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:00.836309  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:00.836330  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:00.851074  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:00.851091  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:00.909067  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:00.901998   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.902472   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904121   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.904638   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:00.906303   12279 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:00.909078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:00.909089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:00.967974  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:00.967996  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:03.498950  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:03.509660  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:03.509721  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:03.535662  109844 cri.go:89] found id: ""
	I1002 20:58:03.535677  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.535684  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:03.535689  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:03.535733  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:03.561250  109844 cri.go:89] found id: ""
	I1002 20:58:03.561265  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.561272  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:03.561277  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:03.561321  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:03.587048  109844 cri.go:89] found id: ""
	I1002 20:58:03.587067  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.587076  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:03.587083  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:03.587147  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:03.613674  109844 cri.go:89] found id: ""
	I1002 20:58:03.613690  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.613697  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:03.613702  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:03.613769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:03.640328  109844 cri.go:89] found id: ""
	I1002 20:58:03.640347  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.640355  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:03.640361  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:03.640422  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:03.666291  109844 cri.go:89] found id: ""
	I1002 20:58:03.666312  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.666319  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:03.666331  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:03.666382  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:03.691967  109844 cri.go:89] found id: ""
	I1002 20:58:03.691985  109844 logs.go:282] 0 containers: []
	W1002 20:58:03.691992  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:03.692006  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:03.692016  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:03.759409  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:03.759439  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:03.774258  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:03.774279  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:03.832338  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:03.825592   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.826120   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.827704   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.828142   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:03.829691   12391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:03.832353  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:03.832368  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:03.893996  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:03.894020  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:06.425787  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:06.436589  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:06.436637  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:06.462848  109844 cri.go:89] found id: ""
	I1002 20:58:06.462863  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.462870  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:06.462876  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:06.462923  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:06.488755  109844 cri.go:89] found id: ""
	I1002 20:58:06.488775  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.488784  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:06.488790  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:06.488840  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:06.514901  109844 cri.go:89] found id: ""
	I1002 20:58:06.514916  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.514922  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:06.514927  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:06.514970  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:06.541198  109844 cri.go:89] found id: ""
	I1002 20:58:06.541216  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.541222  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:06.541227  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:06.541274  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:06.566811  109844 cri.go:89] found id: ""
	I1002 20:58:06.566829  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.566835  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:06.566839  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:06.566889  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:06.592998  109844 cri.go:89] found id: ""
	I1002 20:58:06.593016  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.593025  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:06.593032  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:06.593082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:06.619126  109844 cri.go:89] found id: ""
	I1002 20:58:06.619142  109844 logs.go:282] 0 containers: []
	W1002 20:58:06.619149  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:06.619156  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:06.619169  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:06.688927  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:06.688949  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:06.703470  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:06.703489  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:06.759531  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:06.752604   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.753172   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.754947   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.755395   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:06.756902   12512 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:06.759547  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:06.759558  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:06.821429  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:06.821453  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:09.350584  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:09.361407  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:09.361457  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:09.387670  109844 cri.go:89] found id: ""
	I1002 20:58:09.387686  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.387692  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:09.387697  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:09.387769  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:09.414282  109844 cri.go:89] found id: ""
	I1002 20:58:09.414297  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.414303  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:09.414308  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:09.414359  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:09.439986  109844 cri.go:89] found id: ""
	I1002 20:58:09.440004  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.440013  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:09.440021  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:09.440078  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:09.465260  109844 cri.go:89] found id: ""
	I1002 20:58:09.465274  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.465279  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:09.465284  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:09.465342  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:09.490459  109844 cri.go:89] found id: ""
	I1002 20:58:09.490475  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.490485  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:09.490492  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:09.490542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:09.517572  109844 cri.go:89] found id: ""
	I1002 20:58:09.517589  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.517597  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:09.517604  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:09.517657  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:09.543171  109844 cri.go:89] found id: ""
	I1002 20:58:09.543190  109844 logs.go:282] 0 containers: []
	W1002 20:58:09.543200  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:09.543210  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:09.543224  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:09.610811  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:09.610836  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:09.625732  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:09.625765  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:09.684133  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 20:58:09.677059   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.677657   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679235   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.679641   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:09.681326   12636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 20:58:09.684159  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:09.684172  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:09.750121  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:09.750146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:12.281914  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:12.292614  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:12.292681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:12.319213  109844 cri.go:89] found id: ""
	I1002 20:58:12.319229  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.319236  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:12.319241  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:12.319307  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:12.346475  109844 cri.go:89] found id: ""
	I1002 20:58:12.346491  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.346497  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:12.346506  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:12.346558  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:12.373396  109844 cri.go:89] found id: ""
	I1002 20:58:12.373412  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.373418  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:12.373422  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:12.373472  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:12.399960  109844 cri.go:89] found id: ""
	I1002 20:58:12.399975  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.399984  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:12.399990  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:12.400046  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:12.426115  109844 cri.go:89] found id: ""
	I1002 20:58:12.426134  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.426143  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:12.426148  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:12.426199  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:12.453989  109844 cri.go:89] found id: ""
	I1002 20:58:12.454005  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.454012  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:12.454017  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:12.454082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:12.480468  109844 cri.go:89] found id: ""
	I1002 20:58:12.480482  109844 logs.go:282] 0 containers: []
	W1002 20:58:12.480489  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:12.480497  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:12.480506  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:12.546963  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:12.546987  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:12.561865  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:12.561884  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:12.618630  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:12.611604   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.612174   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.613811   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.614220   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:12.615797   12754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:12.618644  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:12.618659  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:12.679779  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:12.679800  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
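	The block above is one iteration of minikube's control-plane wait loop: it pgreps for a live kube-apiserver process, asks CRI-O for containers of each control-plane component (every query here returns an empty ID list), then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A minimal sketch of the same checks run by hand on the node (e.g. via `minikube ssh -p <profile>`); the individual commands are taken verbatim from the log, while the loop wrapper, the component array, and the sleep are illustrative assumptions:

	    #!/usr/bin/env bash
	    # Sketch of the health-poll cycle logged above. The individual commands
	    # are verbatim from the log; the loop, array, and sleep are assumptions.
	    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet)
	    while true; do
	      # minikube first checks for a running apiserver process ...
	      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	      # ... then lists CRI containers per component; in the failing run
	      # above, every one of these queries came back empty.
	      for c in "${components[@]}"; do
	        ids=$(sudo crictl ps -a --quiet --name="$c")
	        echo "$c: ${ids:-<no containers>}"
	      done
	      sleep 3   # the timestamps above show a roughly 3-second cadence
	    done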
	I1002 20:58:15.211438  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:15.222920  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:15.222984  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:15.249459  109844 cri.go:89] found id: ""
	I1002 20:58:15.249477  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.249486  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:15.249493  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:15.249563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:15.275298  109844 cri.go:89] found id: ""
	I1002 20:58:15.275317  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.275324  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:15.275329  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:15.275376  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:15.301700  109844 cri.go:89] found id: ""
	I1002 20:58:15.301716  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.301722  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:15.301730  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:15.301798  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:15.329414  109844 cri.go:89] found id: ""
	I1002 20:58:15.329435  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.329442  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:15.329449  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:15.329509  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:15.355068  109844 cri.go:89] found id: ""
	I1002 20:58:15.355085  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.355093  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:15.355098  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:15.355148  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:15.380359  109844 cri.go:89] found id: ""
	I1002 20:58:15.380376  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.380383  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:15.380388  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:15.380447  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:15.407083  109844 cri.go:89] found id: ""
	I1002 20:58:15.407100  109844 logs.go:282] 0 containers: []
	W1002 20:58:15.407107  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:15.407114  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:15.407125  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:15.475929  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:15.475952  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:15.490571  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:15.490597  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:15.548455  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:15.541509   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.542074   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.543830   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.544263   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:15.545369   12875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:15.548470  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:15.548492  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:15.612985  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:15.613011  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:18.144173  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:18.154768  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:18.154839  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:18.181108  109844 cri.go:89] found id: ""
	I1002 20:58:18.181127  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.181135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:18.181142  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:18.181211  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:18.207541  109844 cri.go:89] found id: ""
	I1002 20:58:18.207557  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.207564  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:18.207568  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:18.207617  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:18.234607  109844 cri.go:89] found id: ""
	I1002 20:58:18.234623  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.234630  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:18.234635  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:18.234682  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:18.262449  109844 cri.go:89] found id: ""
	I1002 20:58:18.262465  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.262471  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:18.262476  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:18.262525  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:18.288587  109844 cri.go:89] found id: ""
	I1002 20:58:18.288604  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.288611  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:18.288615  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:18.288671  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:18.315591  109844 cri.go:89] found id: ""
	I1002 20:58:18.315608  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.315616  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:18.315623  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:18.315686  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:18.341916  109844 cri.go:89] found id: ""
	I1002 20:58:18.341934  109844 logs.go:282] 0 containers: []
	W1002 20:58:18.341943  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:18.341953  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:18.341967  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:18.409370  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:18.409397  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:18.423940  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:18.423957  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:18.481317  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:18.474299   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.474857   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476482   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.476953   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:18.478581   13007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:18.481328  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:18.481341  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:18.544851  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:18.544915  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
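	Note the fallback chain in the container-status command above: crictl is resolved via which (falling back to the bare name so a missing binary still produces a legible error), and if the crictl invocation fails entirely, the command falls back to docker ps. The same idiom spelled out as a function; the helper name is invented for illustration:

	    # Hypothetical helper mirroring the fallback in the log line above.
	    runtime_ps() {
	      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	    }
	    runtime_ps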
	I1002 20:58:21.076714  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:21.087984  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:21.088035  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:21.114553  109844 cri.go:89] found id: ""
	I1002 20:58:21.114567  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.114574  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:21.114579  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:21.114627  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:21.140623  109844 cri.go:89] found id: ""
	I1002 20:58:21.140640  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.140647  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:21.140652  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:21.140709  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:21.167287  109844 cri.go:89] found id: ""
	I1002 20:58:21.167303  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.167310  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:21.167314  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:21.167366  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:21.192955  109844 cri.go:89] found id: ""
	I1002 20:58:21.192970  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.192976  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:21.192981  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:21.193026  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:21.218443  109844 cri.go:89] found id: ""
	I1002 20:58:21.218461  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.218470  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:21.218477  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:21.218543  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:21.245610  109844 cri.go:89] found id: ""
	I1002 20:58:21.245629  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.245636  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:21.245641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:21.245705  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:21.274044  109844 cri.go:89] found id: ""
	I1002 20:58:21.274062  109844 logs.go:282] 0 containers: []
	W1002 20:58:21.274071  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:21.274082  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:21.274094  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:21.344823  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:21.344846  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:21.359586  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:21.359607  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:21.415715  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:21.408650   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.409207   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.410856   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.411238   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:21.412941   13125 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:21.415727  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:21.415761  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:21.481719  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:21.481748  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.012099  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:24.023176  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:24.023230  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:24.048833  109844 cri.go:89] found id: ""
	I1002 20:58:24.048848  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.048854  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:24.048859  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:24.048910  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:24.075718  109844 cri.go:89] found id: ""
	I1002 20:58:24.075734  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.075760  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:24.075767  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:24.075820  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:24.102393  109844 cri.go:89] found id: ""
	I1002 20:58:24.102408  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.102415  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:24.102420  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:24.102470  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:24.128211  109844 cri.go:89] found id: ""
	I1002 20:58:24.128226  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.128233  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:24.128237  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:24.128295  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:24.154298  109844 cri.go:89] found id: ""
	I1002 20:58:24.154317  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.154337  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:24.154342  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:24.154400  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:24.180259  109844 cri.go:89] found id: ""
	I1002 20:58:24.180279  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.180289  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:24.180294  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:24.180343  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:24.206017  109844 cri.go:89] found id: ""
	I1002 20:58:24.206032  109844 logs.go:282] 0 containers: []
	W1002 20:58:24.206038  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:24.206045  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:24.206057  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:24.262477  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:24.255581   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.256099   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.257667   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.258105   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:24.259636   13243 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:24.262487  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:24.262499  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:24.326558  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:24.326583  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:24.357911  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:24.357927  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:24.425144  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:24.425170  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:26.942340  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:26.953162  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:26.953210  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:26.977629  109844 cri.go:89] found id: ""
	I1002 20:58:26.977645  109844 logs.go:282] 0 containers: []
	W1002 20:58:26.977652  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:26.977656  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:26.977701  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:27.003794  109844 cri.go:89] found id: ""
	I1002 20:58:27.003810  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.003817  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:27.003821  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:27.003871  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:27.031644  109844 cri.go:89] found id: ""
	I1002 20:58:27.031662  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.031669  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:27.031673  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:27.031723  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:27.058490  109844 cri.go:89] found id: ""
	I1002 20:58:27.058522  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.058529  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:27.058533  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:27.058580  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:27.083451  109844 cri.go:89] found id: ""
	I1002 20:58:27.083468  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.083475  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:27.083480  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:27.083536  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:27.108449  109844 cri.go:89] found id: ""
	I1002 20:58:27.108467  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.108475  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:27.108481  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:27.108542  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:27.135415  109844 cri.go:89] found id: ""
	I1002 20:58:27.135433  109844 logs.go:282] 0 containers: []
	W1002 20:58:27.135441  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:27.135451  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:27.135467  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:27.206016  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:27.206039  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:27.220873  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:27.220894  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:27.276309  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:27.269235   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.269791   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271364   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.271799   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:27.273317   13367 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:27.276320  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:27.276335  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:27.341398  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:27.341421  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:29.872391  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:29.883459  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:29.883531  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:29.909713  109844 cri.go:89] found id: ""
	I1002 20:58:29.909729  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.909748  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:29.909755  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:29.909806  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:29.934338  109844 cri.go:89] found id: ""
	I1002 20:58:29.934354  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.934360  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:29.934365  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:29.934409  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:29.961900  109844 cri.go:89] found id: ""
	I1002 20:58:29.961917  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.961926  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:29.961932  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:29.961998  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:29.988238  109844 cri.go:89] found id: ""
	I1002 20:58:29.988253  109844 logs.go:282] 0 containers: []
	W1002 20:58:29.988260  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:29.988265  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:29.988328  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:30.013598  109844 cri.go:89] found id: ""
	I1002 20:58:30.013613  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.013619  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:30.013624  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:30.013674  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:30.040799  109844 cri.go:89] found id: ""
	I1002 20:58:30.040817  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.040824  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:30.040829  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:30.040875  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:30.067159  109844 cri.go:89] found id: ""
	I1002 20:58:30.067174  109844 logs.go:282] 0 containers: []
	W1002 20:58:30.067180  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:30.067187  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:30.067199  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:30.081264  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:30.081282  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:30.136411  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:30.129335   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.129861   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131445   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.131865   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:30.133370   13495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:30.136422  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:30.136436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:30.198567  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:30.198599  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:30.226466  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:30.226488  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:32.794266  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:32.805593  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:32.805643  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:32.832000  109844 cri.go:89] found id: ""
	I1002 20:58:32.832015  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.832022  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:32.832027  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:32.832072  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:32.858662  109844 cri.go:89] found id: ""
	I1002 20:58:32.858680  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.858687  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:32.858691  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:32.858758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:32.884652  109844 cri.go:89] found id: ""
	I1002 20:58:32.884671  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.884679  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:32.884686  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:32.884767  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:32.911548  109844 cri.go:89] found id: ""
	I1002 20:58:32.911571  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.911578  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:32.911583  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:32.911631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:32.939319  109844 cri.go:89] found id: ""
	I1002 20:58:32.939335  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.939343  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:32.939347  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:32.939396  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:32.965654  109844 cri.go:89] found id: ""
	I1002 20:58:32.965670  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.965677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:32.965681  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:32.965750  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:32.991821  109844 cri.go:89] found id: ""
	I1002 20:58:32.991837  109844 logs.go:282] 0 containers: []
	W1002 20:58:32.991843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:32.991851  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:32.991861  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:33.059096  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:33.059118  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:33.074520  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:33.074536  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:33.130853  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:33.124022   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.124509   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126111   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.126586   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:33.128121   13625 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:33.130867  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:33.130881  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:33.196122  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:33.196146  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:35.728638  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:35.739628  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:35.739676  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:35.764726  109844 cri.go:89] found id: ""
	I1002 20:58:35.764760  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.764771  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:35.764777  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:35.764823  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:35.791011  109844 cri.go:89] found id: ""
	I1002 20:58:35.791026  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.791032  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:35.791037  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:35.791082  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:35.817209  109844 cri.go:89] found id: ""
	I1002 20:58:35.817225  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.817231  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:35.817236  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:35.817281  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:35.842125  109844 cri.go:89] found id: ""
	I1002 20:58:35.842139  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.842145  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:35.842154  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:35.842200  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:35.867608  109844 cri.go:89] found id: ""
	I1002 20:58:35.867625  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.867631  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:35.867636  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:35.867681  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:35.893798  109844 cri.go:89] found id: ""
	I1002 20:58:35.893813  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.893819  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:35.893824  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:35.893881  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:35.920822  109844 cri.go:89] found id: ""
	I1002 20:58:35.920837  109844 logs.go:282] 0 containers: []
	W1002 20:58:35.920843  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:35.920851  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:35.920862  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:35.982786  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:35.982809  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:36.012445  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:36.012461  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:36.079729  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:36.079764  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:36.094119  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:36.094139  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:36.149838  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:36.142929   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.143480   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145076   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.145533   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:36.147087   13760 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
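	Every describe-nodes attempt in this run fails the same way: kubectl dials https://localhost:8441 and is refused, which is consistent with the empty crictl listings above; no apiserver container exists, so nothing listens on port 8441. A quick confirmation from the node (the ss invocation is an illustrative assumption; the other two commands appear verbatim in the log):

	    # Expect no LISTEN entry on 8441 while the apiserver is down
	    # (ss usage is illustrative, not from the log).
	    sudo ss -ltn 'sport = :8441'
	    # Expect empty output, matching the 'found id: ""' lines above.
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    # Reproduces the "connection refused" failure captured in the log.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig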
	I1002 20:58:38.650569  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:38.661345  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:38.661406  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:38.687690  109844 cri.go:89] found id: ""
	I1002 20:58:38.687709  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.687719  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:38.687729  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:38.687800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:38.712812  109844 cri.go:89] found id: ""
	I1002 20:58:38.712830  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.712840  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:38.712846  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:38.712897  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:38.738922  109844 cri.go:89] found id: ""
	I1002 20:58:38.738938  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.738945  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:38.738951  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:38.739014  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:38.766166  109844 cri.go:89] found id: ""
	I1002 20:58:38.766184  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.766191  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:38.766201  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:38.766259  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:38.793662  109844 cri.go:89] found id: ""
	I1002 20:58:38.793679  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.793687  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:38.793692  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:38.793758  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:38.820204  109844 cri.go:89] found id: ""
	I1002 20:58:38.820225  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.820233  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:38.820242  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:38.820301  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:38.846100  109844 cri.go:89] found id: ""
	I1002 20:58:38.846116  109844 logs.go:282] 0 containers: []
	W1002 20:58:38.846122  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:38.846130  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:38.846143  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:38.912234  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:38.912257  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:38.926642  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:38.926661  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:38.983128  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:38.975680   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.976323   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.977925   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.978355   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:38.979926   13865 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:38.983140  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:38.983151  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:39.042170  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:39.042192  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
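	From here the pattern repeats: minikube polls for a running apiserver process roughly every 3 seconds (20:58:38, :41, :44, :47, :50 below) and re-gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs on every miss. The poll amounts to this sketch, with the interval approximated from the timestamps:

	    # sketch of the wait loop that drives the repeated gathering below
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	        sleep 3    # observed spacing between attempts is about 3 s
	    done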
	I1002 20:58:41.573431  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:41.584132  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:41.584179  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:41.610465  109844 cri.go:89] found id: ""
	I1002 20:58:41.610490  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.610500  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:41.610507  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:41.610571  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:41.636463  109844 cri.go:89] found id: ""
	I1002 20:58:41.636481  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.636488  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:41.636493  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:41.636544  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:41.663306  109844 cri.go:89] found id: ""
	I1002 20:58:41.663324  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.663334  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:41.663340  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:41.663389  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:41.689945  109844 cri.go:89] found id: ""
	I1002 20:58:41.689963  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.689970  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:41.689975  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:41.690030  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:41.716483  109844 cri.go:89] found id: ""
	I1002 20:58:41.716498  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.716511  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:41.716515  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:41.716563  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:41.741653  109844 cri.go:89] found id: ""
	I1002 20:58:41.741670  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.741677  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:41.741682  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:41.741728  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:41.768401  109844 cri.go:89] found id: ""
	I1002 20:58:41.768418  109844 logs.go:282] 0 containers: []
	W1002 20:58:41.768425  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:41.768433  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:41.768444  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:41.825098  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:41.818285   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.818820   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820386   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.820857   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:41.822413   13980 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:41.825108  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:41.825120  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:41.885569  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:41.885592  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:41.914823  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:41.914840  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:41.982285  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:41.982309  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.498020  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:44.508926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:44.508975  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:44.534766  109844 cri.go:89] found id: ""
	I1002 20:58:44.534783  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.534791  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:44.534797  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:44.534849  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:44.561400  109844 cri.go:89] found id: ""
	I1002 20:58:44.561418  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.561425  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:44.561429  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:44.561481  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:44.587621  109844 cri.go:89] found id: ""
	I1002 20:58:44.587638  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.587644  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:44.587649  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:44.587696  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:44.612688  109844 cri.go:89] found id: ""
	I1002 20:58:44.612703  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.612709  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:44.612717  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:44.612784  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:44.639713  109844 cri.go:89] found id: ""
	I1002 20:58:44.639728  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.639755  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:44.639763  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:44.639821  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:44.666252  109844 cri.go:89] found id: ""
	I1002 20:58:44.666271  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.666278  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:44.666283  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:44.666330  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:44.692295  109844 cri.go:89] found id: ""
	I1002 20:58:44.692311  109844 logs.go:282] 0 containers: []
	W1002 20:58:44.692318  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:44.692326  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:44.692336  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:44.763438  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:44.763462  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:44.777919  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:44.777938  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:44.833114  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:44.826286   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.826821   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828377   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.828833   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:44.830344   14111 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:44.833126  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:44.833138  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:44.893410  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:44.893436  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.425929  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:47.437727  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 20:58:47.437800  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 20:58:47.465106  109844 cri.go:89] found id: ""
	I1002 20:58:47.465125  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.465135  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 20:58:47.465141  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 20:58:47.465202  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 20:58:47.492450  109844 cri.go:89] found id: ""
	I1002 20:58:47.492469  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.492477  109844 logs.go:284] No container was found matching "etcd"
	I1002 20:58:47.492487  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 20:58:47.492548  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 20:58:47.518249  109844 cri.go:89] found id: ""
	I1002 20:58:47.518266  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.518273  109844 logs.go:284] No container was found matching "coredns"
	I1002 20:58:47.518280  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 20:58:47.518329  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 20:58:47.546009  109844 cri.go:89] found id: ""
	I1002 20:58:47.546026  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.546035  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 20:58:47.546040  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 20:58:47.546095  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 20:58:47.571969  109844 cri.go:89] found id: ""
	I1002 20:58:47.571984  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.571991  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 20:58:47.571995  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 20:58:47.572044  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 20:58:47.598332  109844 cri.go:89] found id: ""
	I1002 20:58:47.598352  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.598362  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 20:58:47.598371  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 20:58:47.598433  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 20:58:47.624909  109844 cri.go:89] found id: ""
	I1002 20:58:47.624923  109844 logs.go:282] 0 containers: []
	W1002 20:58:47.624932  109844 logs.go:284] No container was found matching "kindnet"
	I1002 20:58:47.624942  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 20:58:47.624955  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 20:58:47.682066  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 20:58:47.675019   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.675538   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677178   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.677660   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 20:58:47.679133   14230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1002 20:58:47.682078  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 20:58:47.682089  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 20:58:47.742340  109844 logs.go:123] Gathering logs for container status ...
	I1002 20:58:47.742363  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 20:58:47.772411  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 20:58:47.772428  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 20:58:47.841816  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 20:58:47.841839  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 20:58:50.357907  109844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 20:58:50.368776  109844 kubeadm.go:601] duration metric: took 4m2.902167912s to restartPrimaryControlPlane
	W1002 20:58:50.368863  109844 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1002 20:58:50.368929  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 20:58:50.818759  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 20:58:50.831475  109844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 20:58:50.839597  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 20:58:50.839643  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 20:58:50.847290  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 20:58:50.847300  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 20:58:50.847341  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 20:58:50.854889  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 20:58:50.854928  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 20:58:50.862239  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 20:58:50.869705  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 20:58:50.869763  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 20:58:50.877993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.885836  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 20:58:50.885887  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 20:58:50.893993  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 20:58:50.902316  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 20:58:50.902371  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
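	The eight commands above are a stale-kubeconfig sweep: each file under /etc/kubernetes is grepped for the expected control-plane URL and deleted when the URL is absent (here every grep exits with status 2 because kubeadm reset already removed the files). Condensed, the sweep is equivalent to this sketch:

	    # sketch: drop kubeconfigs that do not point at the expected control plane
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f" \
	            || sudo rm -f "/etc/kubernetes/$f"
	    done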
	I1002 20:58:50.910549  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 20:58:50.946945  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 20:58:50.946991  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 20:58:50.966485  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 20:58:50.966578  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 20:58:50.966620  109844 kubeadm.go:318] OS: Linux
	I1002 20:58:50.966677  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 20:58:50.966753  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 20:58:50.966809  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 20:58:50.966867  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 20:58:50.966933  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 20:58:50.966988  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 20:58:50.967043  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 20:58:50.967090  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 20:58:51.025471  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 20:58:51.025621  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 20:58:51.025764  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 20:58:51.032580  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 20:58:51.036477  109844 out.go:252]   - Generating certificates and keys ...
	I1002 20:58:51.036579  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 20:58:51.036655  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 20:58:51.036755  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 20:58:51.036828  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 20:58:51.036907  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 20:58:51.036961  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 20:58:51.037039  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 20:58:51.037113  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 20:58:51.037183  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 20:58:51.037249  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 20:58:51.037279  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 20:58:51.037325  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 20:58:51.187682  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 20:58:51.260672  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 20:58:51.923940  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 20:58:51.962992  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 20:58:52.022920  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 20:58:52.023298  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 20:58:52.025586  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 20:58:52.027495  109844 out.go:252]   - Booting up control plane ...
	I1002 20:58:52.027608  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 20:58:52.027713  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 20:58:52.027804  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 20:58:52.042406  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 20:58:52.042511  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 20:58:52.049022  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 20:58:52.049337  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 20:58:52.049378  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 20:58:52.155568  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 20:58:52.155766  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 20:58:53.156432  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000945383s
	I1002 20:58:53.159662  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 20:58:53.159797  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 20:58:53.159937  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 20:58:53.160043  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:02:53.160214  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	I1002 21:02:53.160391  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	I1002 21:02:53.160519  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	I1002 21:02:53.160527  109844 kubeadm.go:318] 
	I1002 21:02:53.160620  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:02:53.160688  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:02:53.160785  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:02:53.160862  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:02:53.160927  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:02:53.161001  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:02:53.161004  109844 kubeadm.go:318] 
	I1002 21:02:53.164399  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:02:53.164524  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:02:53.165091  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:02:53.165168  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
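	All three components time out together exactly 4m0s after the health checks began, which points at static pods that never started (or exited immediately) under CRI-O rather than at any single component. kubeadm's hint above is the right first step; as a sketch, not executed in this run:

	    # sketch: list control-plane containers, then pull logs from a failing one
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute a real ID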
	W1002 21:02:53.165349  109844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.000945383s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000318497s
	[control-plane-check] kube-apiserver is not healthy after 4m0.00035696s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000784779s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
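	Neither preflight warning explains the failure. The SystemVerification warning only means the kernel config could not be read on this 6.8.0-1041-gcp kernel, and minikube already passes SystemVerification in --ignore-preflight-errors for the docker driver; the Service-Kubelet warning is routine because kubeadm starts the kubelet itself here. On a self-managed node the latter would be addressed with:

	    # sketch: only relevant outside minikube, which manages the kubelet directly
	    sudo systemctl enable kubelet.service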
	
	I1002 21:02:53.165441  109844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:02:53.609874  109844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:02:53.623007  109844 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:02:53.623061  109844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:02:53.631223  109844 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:02:53.631235  109844 kubeadm.go:157] found existing configuration files:
	
	I1002 21:02:53.631283  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1002 21:02:53.639093  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:02:53.639137  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:02:53.647228  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1002 21:02:53.655566  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:02:53.655610  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:02:53.663430  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.671338  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:02:53.671390  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:02:53.679032  109844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1002 21:02:53.686944  109844 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:02:53.686993  109844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:02:53.694170  109844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:02:53.730792  109844 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:02:53.730837  109844 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:02:53.752207  109844 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:02:53.752260  109844 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:02:53.752295  109844 kubeadm.go:318] OS: Linux
	I1002 21:02:53.752337  109844 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:02:53.752403  109844 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:02:53.752440  109844 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:02:53.752485  109844 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:02:53.752585  109844 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:02:53.752641  109844 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:02:53.752685  109844 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:02:53.752720  109844 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:02:53.811160  109844 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:02:53.811301  109844 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:02:53.811426  109844 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:02:53.817686  109844 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:02:53.822264  109844 out.go:252]   - Generating certificates and keys ...
	I1002 21:02:53.822366  109844 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:02:53.822429  109844 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:02:53.822500  109844 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:02:53.822558  109844 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:02:53.822649  109844 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:02:53.822721  109844 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:02:53.822797  109844 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:02:53.822883  109844 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:02:53.822984  109844 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:02:53.823080  109844 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:02:53.823129  109844 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:02:53.823200  109844 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:02:54.089650  109844 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:02:54.165018  109844 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:02:54.351562  109844 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:02:54.606636  109844 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:02:54.799514  109844 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:02:54.799929  109844 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:02:54.802220  109844 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:02:54.804402  109844 out.go:252]   - Booting up control plane ...
	I1002 21:02:54.804516  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:02:54.804616  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:02:54.804724  109844 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:02:54.818368  109844 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:02:54.818509  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:02:54.825531  109844 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:02:54.826683  109844 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:02:54.826734  109844 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:02:54.927546  109844 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:02:54.927690  109844 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:02:55.429241  109844 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.893032ms
	I1002 21:02:55.432296  109844 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:02:55.432407  109844 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	I1002 21:02:55.432483  109844 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:02:55.432583  109844 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:06:55.432671  109844 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	I1002 21:06:55.432869  109844 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	I1002 21:06:55.432961  109844 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	I1002 21:06:55.432968  109844 kubeadm.go:318] 
	I1002 21:06:55.433037  109844 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:06:55.433100  109844 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:06:55.433168  109844 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:06:55.433259  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:06:55.433328  109844 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:06:55.433419  109844 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:06:55.433434  109844 kubeadm.go:318] 
	I1002 21:06:55.436835  109844 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:06:55.436949  109844 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:06:55.437474  109844 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:06:55.437568  109844 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:06:55.437594  109844 kubeadm.go:402] duration metric: took 12m8.007755847s to StartCluster
	I1002 21:06:55.437641  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:06:55.437710  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:06:55.464382  109844 cri.go:89] found id: ""
	I1002 21:06:55.464398  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.464404  109844 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:06:55.464409  109844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:06:55.464469  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:06:55.490606  109844 cri.go:89] found id: ""
	I1002 21:06:55.490623  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.490633  109844 logs.go:284] No container was found matching "etcd"
	I1002 21:06:55.490638  109844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:06:55.490702  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:06:55.516529  109844 cri.go:89] found id: ""
	I1002 21:06:55.516547  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.516560  109844 logs.go:284] No container was found matching "coredns"
	I1002 21:06:55.516565  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:06:55.516631  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:06:55.542896  109844 cri.go:89] found id: ""
	I1002 21:06:55.542913  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.542919  109844 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:06:55.542926  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:06:55.542976  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:06:55.570192  109844 cri.go:89] found id: ""
	I1002 21:06:55.570206  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.570212  109844 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:06:55.570217  109844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:06:55.570263  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:06:55.596069  109844 cri.go:89] found id: ""
	I1002 21:06:55.596092  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.596102  109844 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:06:55.596107  109844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:06:55.596157  109844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:06:55.621555  109844 cri.go:89] found id: ""
	I1002 21:06:55.621572  109844 logs.go:282] 0 containers: []
	W1002 21:06:55.621579  109844 logs.go:284] No container was found matching "kindnet"
	I1002 21:06:55.621587  109844 logs.go:123] Gathering logs for dmesg ...
	I1002 21:06:55.621600  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1002 21:06:55.635371  109844 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:06:55.635389  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:06:55.691316  109844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:06:55.684497   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.685072   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.686619   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.687074   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:06:55.688662   15582 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:06:55.691337  109844 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:06:55.691347  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:06:55.755862  109844 logs.go:123] Gathering logs for container status ...
	I1002 21:06:55.755886  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:06:55.784730  109844 logs.go:123] Gathering logs for kubelet ...
	I1002 21:06:55.784767  109844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1002 21:06:55.854494  109844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:06:55.854545  109844 out.go:285] * 
	W1002 21:06:55.854631  109844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.854657  109844 out.go:285] * 
	W1002 21:06:55.856372  109844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:06:55.860308  109844 out.go:203] 
	W1002 21:06:55.861642  109844 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.893032ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8441/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.000136441s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000498554s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000589125s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.49.2:8441/livez: Get "https://control-plane.minikube.internal:8441/livez?timeout=10s": dial tcp 192.168.49.2:8441: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:06:55.861662  109844 out.go:285] * 
	I1002 21:06:55.863851  109844 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229621183Z" level=info msg="createCtr: removing container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.229659341Z" level=info msg="createCtr: deleting container 1beefe15b63b796e652c01ac1f61b13690321cfccbd88674e7a5b2a56d2579c4 from storage" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:06:55 functional-012915 crio[5820]: time="2025-10-02T21:06:55.231972859Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=418d1224-9f9d-40f5-a409-fe068d8d8eca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.205202556Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=db075587-8f32-464c-9e5e-46c1b2623e7b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.206210632Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=95dc66c0-7314-42be-9120-81260968bf88 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.207175944Z" level=info msg="Creating container: kube-system/kube-controller-manager-functional-012915/kube-controller-manager" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.207440039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.212436931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.212976081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.231136681Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232746016Z" level=info msg="createCtr: deleting container ID 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb from idIndex" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232798364Z" level=info msg="createCtr: removing container 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.232834131Z" level=info msg="createCtr: deleting container 940deb61e07e3c430096de3c07f5adf9446cf8c0b1ea53018286d264947b97eb from storage" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:00 functional-012915 crio[5820]: time="2025-10-02T21:07:00.234843413Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-functional-012915_kube-system_7e750209f40bc1241cc38d19476e612c_0" id=9ba99aa9-d457-4ab7-bafe-75e1d1d3e2e6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.205198785Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c19f64ff-9f66-4a07-ad68-475d90819996 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.206616651Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=0944eb73-36b5-4739-b2a1-da68c935ff0a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.20799091Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.208380884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.214618925Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.215607645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.2302641Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.231957623Z" level=info msg="createCtr: deleting container ID b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394 from idIndex" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232026593Z" level=info msg="createCtr: removing container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232070563Z" level=info msg="createCtr: deleting container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394 from storage" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.236500722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:07:06.324940   16855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:06.325431   16855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:06.326987   16855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:06.327404   16855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:06.328912   16855 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:07:06 up  2:49,  0 user,  load average: 1.19, 0.28, 0.26
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:06:58 functional-012915 kubelet[14964]: E1002 21:06:58.830030   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: I1002 21:06:58.986288   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:06:58 functional-012915 kubelet[14964]: E1002 21:06:58.986748   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.204682   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235123   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:00 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:00 functional-012915 kubelet[14964]:  > podSandboxID="78541c97616f3ec4e232f9ab35845168ea396e7284f2b19d4d8b8efd1c5094a2"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235224   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:00 functional-012915 kubelet[14964]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:00 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:00 functional-012915 kubelet[14964]: E1002 21:07:00.235258   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 21:07:01 functional-012915 kubelet[14964]: E1002 21:07:01.168800   14964 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8441/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:07:01 functional-012915 kubelet[14964]: E1002 21:07:01.351347   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.204593   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.236928   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:03 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237038   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:03 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237078   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.219572   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.831442   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: I1002 21:07:05.988113   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.988444   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	

-- /stdout --
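The failure mode is consistent across the whole dump: kubeadm's wait-control-plane checks time out with connection refused on 8441/10257/10259 because every control-plane container fails at creation, and the CRI-O and kubelet logs above show the same root error each time, "cannot open sd-bus: No such file or directory". A minimal triage pass on the node, using only the commands the log itself suggests (CONTAINERID is a placeholder):

	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
	sudo journalctl -u crio -n 400

The sd-bus error typically points at the OCI runtime being asked to use the systemd cgroup manager while no systemd D-Bus socket is reachable inside the minikube container; checking the effective CRI-O setting is one plausible next step (a sketch, assuming the stock config locations):

	grep -R cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null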
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (328.641046ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (2.25s)
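Since the apiserver never came up, the remaining Functional subtests fail the same way. A quick manual confirmation, with the profile name and endpoint taken from the log above (run from the test host; the livez path is the one kubeadm was polling):

	out/minikube-linux-amd64 status -p functional-012915
	curl -k https://192.168.49.2:8441/livez

With the control-plane containers never created, status reports "Stopped" and the curl fails with connection refused, matching the dials recorded in the test log.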

TestFunctional/parallel/PersistentVolumeClaim (241.53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1002 21:07:25.750530   84100 retry.go:31] will retry after 18.2520184s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: (previous warning repeated 13 more times)
I1002 21:07:44.003080   84100 retry.go:31] will retry after 21.847233925s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: (previous warning repeated 21 more times)
I1002 21:08:05.850809   84100 retry.go:31] will retry after 36.213656004s: Temporary Error: Get "http:": http: no Host in request URL
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: (previous warning repeated 157 more times)
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (296.221759ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
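The warnings above are one poll loop: the helper lists pods in "kube-system" matching the label selector every few seconds until one reports Running, then gives up when the 4m0s budget expires (hence the final "client rate limiter Wait returned an error: context deadline exceeded"). As a rough sketch only — this is not the suite's actual helper; the function name, the 2-second interval, and the kubeconfig source are assumptions — the pattern looks like this in client-go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodRunning is a hypothetical name; it polls the API server by label
// selector and returns nil once any matching pod reaches the Running phase.
func waitForPodRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// This is the shape of the WARNING lines above: the List call
			// itself fails because nothing is listening on 8441.
			fmt.Printf("WARNING: pod list for %q %q returned: %v\n", ns, selector, err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded" once the 4m0s budget is spent
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForPodRunning(ctx, cs, "kube-system", "integration-test=storage-provisioner"); err != nil {
		fmt.Println("failed waiting:", err)
	}
}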
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
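Two details in the inspect output matter here: the kicbase container itself is Running, and the apiserver port 8441/tcp is published on 127.0.0.1:32781, so Docker networking is intact and the refused connections mean no process is listening inside the guest. A throwaway sketch (not part of the suite; it only assumes the docker CLI is on PATH and reuses the container name from the output above) for pulling that mapping out of `docker inspect`:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models just the slice of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-012915").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		panic("no such container")
	}
	for _, b := range entries[0].NetworkSettings.Ports["8441/tcp"] {
		// For the container above this prints: 127.0.0.1:32781
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}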
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (294.245503ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-012915 ssh findmnt -T /mount1                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount          │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount1 --alsologtostderr -v=1 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount          │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/84100.pem                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /usr/share/ca-certificates/84100.pem                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/51391683.0                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount1                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/841002.pem                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount2                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /usr/share/ca-certificates/841002.pem                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount3                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount          │ -p functional-012915 --kill=true                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-012915 --alsologtostderr -v=1                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh            │ functional-012915 ssh sudo cat /etc/test/nested/copy/84100/hosts                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format short --alsologtostderr                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format json --alsologtostderr                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format table --alsologtostderr                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format yaml --alsologtostderr                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh pgrep buildkitd                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:07:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:07:06.995028  127793 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.995116  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995122  127793 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.995128  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995487  127793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.995965  127793 out.go:368] Setting JSON to false
	I1002 21:07:06.996965  127793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.997080  127793 start.go:140] virtualization: kvm guest
	I1002 21:07:06.999028  127793 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:07.000503  127793 notify.go:220] Checking for updates...
	I1002 21:07:07.000539  127793 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:07.002031  127793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:07.003411  127793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:07.004900  127793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:07.006037  127793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:07.007128  127793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:07.008912  127793 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:07.009362  127793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:07.034759  127793 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:07.034869  127793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:07.097804  127793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:07.08803788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:07.097914  127793 docker.go:318] overlay module found
	I1002 21:07:07.101324  127793 out.go:179] * Using the docker driver based on the existing profile
	I1002 21:07:07.102629  127793 start.go:304] selected driver: docker
	I1002 21:07:07.102647  127793 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:07.102753  127793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:07.104576  127793 out.go:203] 
	W1002 21:07:07.105751  127793 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is below the usable minimum of 1800MB
	I1002 21:07:07.107027  127793 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:10:59 functional-012915 crio[5820]: time="2025-10-02T21:10:59.228345688Z" level=info msg="createCtr: removing container 912a1f2cd66f01e9d184967d95483ad42f59190b85db4b8a2e5d256bb70e3c77" id=10df5ffe-e9b5-4173-b853-1743e7e02051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:10:59 functional-012915 crio[5820]: time="2025-10-02T21:10:59.228382011Z" level=info msg="createCtr: deleting container 912a1f2cd66f01e9d184967d95483ad42f59190b85db4b8a2e5d256bb70e3c77 from storage" id=10df5ffe-e9b5-4173-b853-1743e7e02051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:10:59 functional-012915 crio[5820]: time="2025-10-02T21:10:59.230518578Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=10df5ffe-e9b5-4173-b853-1743e7e02051 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.205153975Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=043b965f-491f-4706-a9f1-c079baf0959b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.206176788Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=345251ab-a7d8-4372-bd5b-a8418cdf850a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.207247591Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.20748732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.210924299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.211516972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.231012626Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.23241784Z" level=info msg="createCtr: deleting container ID 4fa5f84ebe83cfdca65d70ae6efa7721f1459f6d8c8a40638459652ecc5dacf6 from idIndex" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.232459797Z" level=info msg="createCtr: removing container 4fa5f84ebe83cfdca65d70ae6efa7721f1459f6d8c8a40638459652ecc5dacf6" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.232493814Z" level=info msg="createCtr: deleting container 4fa5f84ebe83cfdca65d70ae6efa7721f1459f6d8c8a40638459652ecc5dacf6 from storage" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:02 functional-012915 crio[5820]: time="2025-10-02T21:11:02.234550075Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=12f82a67-0a01-44ae-8023-7735e0a14c35 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.205080595Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=fbb5f67d-08eb-4ea3-81d9-b341a5216e95 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.206049319Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=217d30d5-bb93-4bff-868d-e01d940e0495 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.207098456Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-012915/kube-scheduler" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.207324703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.210381956Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.21080664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.226131998Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.227532771Z" level=info msg="createCtr: deleting container ID 6e2ef4f4f470f4b1c7911b847120fa49ef016f9c7892cfd74ac624bf251696f6 from idIndex" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.227573996Z" level=info msg="createCtr: removing container 6e2ef4f4f470f4b1c7911b847120fa49ef016f9c7892cfd74ac624bf251696f6" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.227610721Z" level=info msg="createCtr: deleting container 6e2ef4f4f470f4b1c7911b847120fa49ef016f9c7892cfd74ac624bf251696f6 from storage" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:11:03 functional-012915 crio[5820]: time="2025-10-02T21:11:03.229652524Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=c5dd7a52-9fdb-4858-a066-e030ca54fcb9 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:11:04.588535   19228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:11:04.589135   19228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:11:04.590714   19228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:11:04.591207   19228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:11:04.592851   19228 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:11:04 up  2:53,  0 user,  load average: 0.03, 0.18, 0.22
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:10:59 functional-012915 kubelet[14964]: E1002 21:10:59.230915   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:10:59 functional-012915 kubelet[14964]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:10:59 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:10:59 functional-012915 kubelet[14964]: E1002 21:10:59.230950   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 21:11:02 functional-012915 kubelet[14964]: E1002 21:11:02.204638   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:11:02 functional-012915 kubelet[14964]: E1002 21:11:02.234918   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:11:02 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:11:02 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:11:02 functional-012915 kubelet[14964]: E1002 21:11:02.235028   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:11:02 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:11:02 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:11:02 functional-012915 kubelet[14964]: E1002 21:11:02.235061   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.204579   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.229958   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:11:03 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:11:03 functional-012915 kubelet[14964]:  > podSandboxID="8fcd09580c94c358972341d218f18641fb01c2881f93974b0a738c79d068fdb3"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.230057   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:11:03 functional-012915 kubelet[14964]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:11:03 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.230085   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.451773   14964 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.773776   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8441/api/v1/namespaces/default/events/functional-012915.186ac86d10974d1c\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10974d1c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node functional-012915 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196941596 +0000 UTC m=+0.268988444,LastTimestamp:2025-10-02 21:02:55.199851083 +0000 UTC m=+0.271897939,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:11:03 functional-012915 kubelet[14964]: E1002 21:11:03.869684   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:11:04 functional-012915 kubelet[14964]: I1002 21:11:04.061831   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:11:04 functional-012915 kubelet[14964]: E1002 21:11:04.062239   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	

                                                
                                                
-- /stdout --
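The failure threaded through the CRI-O and kubelet sections above is the repeated "container create failed: cannot open sd-bus: No such file or directory": the OCI runtime is trying to place containers into cgroups via systemd (docker info earlier reports CgroupDriver:systemd) but cannot reach a systemd bus socket inside the guest, so etcd, kube-apiserver, and kube-scheduler all die at CreateContainer and nothing ever listens on 8441. A trivial probe for the sockets involved — illustrative only; these are the conventional systemd/D-Bus socket paths, not something the test itself checks:

package main

import (
	"fmt"
	"os"
)

func main() {
	// sd-bus connects to the D-Bus system bus, or to systemd's private
	// endpoint; if neither socket exists, container creation fails exactly
	// as in the logs above.
	for _, p := range []string{
		"/run/dbus/system_bus_socket", // D-Bus system bus socket
		"/run/systemd/private",        // systemd's private manager endpoint
	} {
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("%s: %v\n", p, err)
		} else {
			fmt.Printf("%s: present\n", p)
		}
	}
}

When systemd is genuinely unavailable in the guest, the usual direction of a fix is to run the runtime with the cgroupfs cgroup manager instead (CRI-O's cgroup_manager setting), though that is an environment question rather than a test bug.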
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (291.667635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (241.53s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (1.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-012915 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-012915 replace --force -f testdata/mysql.yaml: exit status 1 (49.643314ms)

                                                
                                                
** stderr ** 
	E1002 21:07:14.985055  132357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:14.985626  132357 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused
	unable to recognize "testdata/mysql.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-012915 replace --force -f testdata/mysql.yaml" failed: exit status 1
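
For context on the failing command: `kubectl replace --force` deletes the named objects and recreates them from the manifest, so even its delete phase needs a reachable apiserver; with the apiserver down it exits before touching anything. A simplified stand-in for the harness invocation at functional_test.go:1798 (hypothetical program; error handling is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Delete and recreate the objects in testdata/mysql.yaml against the
		// functional-012915 kubectl context, as the test step above does.
		cmd := exec.Command("kubectl", "--context", "functional-012915",
			"replace", "--force", "-f", "testdata/mysql.yaml")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("replace failed: %v\n%s", err, out)
		}
	}
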
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (304.459908ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-012915 image ls                                                                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image save kicbase/echo-server:functional-012915 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount-9p | grep 9p                                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image rm kicbase/echo-server:functional-012915 --alsologtostderr                                                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh -- ls -la /mount-9p                                                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image ls                                                                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo umount -f /mount-9p                                                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ image     │ functional-012915 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image     │ functional-012915 image save --daemon kicbase/echo-server:functional-012915 --alsologtostderr                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount3 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh findmnt -T /mount1                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount1 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ mount     │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/84100.pem                                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /usr/share/ca-certificates/84100.pem                                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount1                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/841002.pem                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount2                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /usr/share/ca-certificates/841002.pem                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh findmnt -T /mount3                                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh       │ functional-012915 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount     │ -p functional-012915 --kill=true                                                                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-012915 --alsologtostderr -v=1                                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh       │ functional-012915 ssh sudo cat /etc/test/nested/copy/84100/hosts                                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:07:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:07:06.995028  127793 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.995116  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995122  127793 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.995128  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995487  127793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.995965  127793 out.go:368] Setting JSON to false
	I1002 21:07:06.996965  127793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.997080  127793 start.go:140] virtualization: kvm guest
	I1002 21:07:06.999028  127793 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:07.000503  127793 notify.go:220] Checking for updates...
	I1002 21:07:07.000539  127793 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:07.002031  127793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:07.003411  127793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:07.004900  127793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:07.006037  127793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:07.007128  127793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:07.008912  127793 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:07.009362  127793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:07.034759  127793 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:07.034869  127793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:07.097804  127793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:07.08803788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:07.097914  127793 docker.go:318] overlay module found
	I1002 21:07:07.101324  127793 out.go:179] * Using the docker driver based on existing profile
	I1002 21:07:07.102629  127793 start.go:304] selected driver: docker
	I1002 21:07:07.102647  127793 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:07.102753  127793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:07.104576  127793 out.go:203] 
	W1002 21:07:07.105751  127793 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:07:07.107027  127793 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.32623916Z" level=info msg="Checking image status: kicbase/echo-server:functional-012915" id=652a0cc7-96b4-4cc9-8770-7f90889ad5d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352056366Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-012915" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352207943Z" level=info msg="Image docker.io/kicbase/echo-server:functional-012915 not found" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.352266249Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-012915 found" id=67615c5c-a803-418e-8082-ace67677acff name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.3774175Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-012915" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.377551842Z" level=info msg="Image localhost/kicbase/echo-server:functional-012915 not found" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:11 functional-012915 crio[5820]: time="2025-10-02T21:07:11.377589602Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-012915 found" id=03fbb612-5013-42b0-9112-00cbe7a10b30 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.122395444Z" level=info msg="Checking image status: kicbase/echo-server:functional-012915" id=609c7946-812d-4f23-ac6b-eb4871d7fd4d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.150148722Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-012915" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.15030666Z" level=info msg="Image docker.io/kicbase/echo-server:functional-012915 not found" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.150356912Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-012915 found" id=e5777318-5d36-425b-bafc-b2846988e349 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176254507Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-012915" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176383818Z" level=info msg="Image localhost/kicbase/echo-server:functional-012915 not found" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:12 functional-012915 crio[5820]: time="2025-10-02T21:07:12.176415337Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-012915 found" id=0cfac42e-cb59-453e-b903-9005525a62f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.205147829Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=699b8931-988e-4b9f-8a6c-fc8b4bcc55ac name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.206204507Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=e0dcae5d-4574-432e-8c24-ebc3abdfca4b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.207417759Z" level=info msg="Creating container: kube-system/kube-apiserver-functional-012915/kube-apiserver" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.207646949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.212703465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.213358544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.229672236Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231295319Z" level=info msg="createCtr: deleting container ID 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e from idIndex" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231345788Z" level=info msg="createCtr: removing container 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.231399172Z" level=info msg="createCtr: deleting container 39ffb395332f78455dbf35a6e7a05d6bf475503d305ffc3851e1d9eacd3f111e from storage" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:14 functional-012915 crio[5820]: time="2025-10-02T21:07:14.234646211Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=f4b5f86e-d258-45a8-a624-5958c5a66c75 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:07:15.883279   18069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.883901   18069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.885451   18069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.885915   18069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:15.887491   18069 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:07:15 up  2:49,  0 user,  load average: 1.54, 0.39, 0.29
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.237406   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	Oct 02 21:07:09 functional-012915 kubelet[14964]: E1002 21:07:09.808618   14964 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8441/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.205836   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237468   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:11 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:11 functional-012915 kubelet[14964]:  > podSandboxID="78541c97616f3ec4e232f9ab35845168ea396e7284f2b19d4d8b8efd1c5094a2"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237611   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:11 functional-012915 kubelet[14964]:         container kube-controller-manager start failed in pod kube-controller-manager-functional-012915_kube-system(7e750209f40bc1241cc38d19476e612c): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:11 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.237648   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-functional-012915" podUID="7e750209f40bc1241cc38d19476e612c"
	Oct 02 21:07:11 functional-012915 kubelet[14964]: E1002 21:07:11.352873   14964 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8441/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8441: connect: connection refused" event="&Event{ObjectMeta:{functional-012915.186ac86d10977047  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:functional-012915,UID:functional-012915,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node functional-012915 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:functional-012915,},FirstTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,LastTimestamp:2025-10-02 21:02:55.196950599 +0000 UTC m=+0.268997447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-012915,}"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: E1002 21:07:12.832640   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: I1002 21:07:12.991217   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:07:12 functional-012915 kubelet[14964]: E1002 21:07:12.991663   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.029458   14964 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-012915&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.204552   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235000   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:14 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:14 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235109   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:14 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:14 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:14 functional-012915 kubelet[14964]: E1002 21:07:14.235153   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:07:15 functional-012915 kubelet[14964]: E1002 21:07:15.220732   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (309.923679ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/MySQL (1.33s)
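
The `==> CRI-O <==` and `==> kubelet <==` excerpts above show the underlying fault: every control-plane container fails at create time with "cannot open sd-bus: No such file or directory", which the OCI runtime emits when CRI-O's systemd cgroup manager cannot reach systemd over D-Bus inside the node container. A minimal diagnostic sketch (the socket paths are the standard ones; whether they apply inside this kicbase image is an assumption):

	// sdbus_check.go: report whether the sockets sd-bus would open exist.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// sd-bus reaches systemd via its private socket or the system D-Bus;
		// with both missing, a systemd cgroup manager cannot create containers.
		for _, sock := range []string{
			"/run/systemd/private",        // systemd's private bus
			"/run/dbus/system_bus_socket", // system D-Bus socket
		} {
			if _, err := os.Stat(sock); err != nil {
				fmt.Printf("%s: %v\n", sock, err)
			} else {
				fmt.Printf("%s: present\n", sock)
			}
		}
	}

Switching CRI-O to `cgroup_manager = "cgroupfs"` (with `conmon_cgroup = "pod"`) is the usual workaround when systemd is unreachable, though whether that is appropriate for this image is beyond what the log shows.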

x
+
TestFunctional/parallel/NodeLabels (1.29s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-012915 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-012915 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (49.740452ms)

** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-012915 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
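
The go-template in the failing command ranges over the first node's `.metadata.labels` map and prints each key; the assertions below then look for the `minikube.k8s.io/*` keys in that output. A self-contained sketch of the same template semantics over an illustrative label map (label values are made up; the `(index .items 0)` lookup is elided):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Illustrative labels; a healthy minikube node carries keys like these.
		labels := map[string]string{
			"kubernetes.io/hostname":  "functional-012915",
			"minikube.k8s.io/name":    "functional-012915",
			"minikube.k8s.io/primary": "true",
			"minikube.k8s.io/version": "v1.37.0",
		}
		// Same range construct the test passes to kubectl.
		tmpl := template.Must(template.New("labels").Parse(`{{range $k, $v := .}}{{$k}} {{end}}`))
		_ = tmpl.Execute(os.Stdout, labels) // prints the label keys, space-separated
	}
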
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1002 21:07:07.695782  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.696133  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697592  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.697903  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:07:07.699324  128164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-012915
helpers_test.go:243: (dbg) docker inspect functional-012915:

-- stdout --
	[
	    {
	        "Id": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	        "Created": "2025-10-02T20:40:11.66855926Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 98601,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T20:40:11.708659535Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hostname",
	        "HostsPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/hosts",
	        "LogPath": "/var/lib/docker/containers/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f/563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f-json.log",
	        "Name": "/functional-012915",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-012915:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-012915",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "563755a7f6599a5d21cd5fb8f2adc6b8a7d19bd6b78c86f3049015475d91278f",
	                "LowerDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aff4026a144db99d7dfb744e2ad9c45068f81611846acc5d2f3c2969158f4966/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-012915",
	                "Source": "/var/lib/docker/volumes/functional-012915/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-012915",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-012915",
	                "name.minikube.sigs.k8s.io": "functional-012915",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cae76b0b611dbd364f6e869c5e756c2af454b41ea9a417238cc4520b3af9cc82",
	            "SandboxKey": "/var/run/docker/netns/cae76b0b611d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-012915": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:fa:42:26:0e:8d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6100222e6e4810a153001d9a8bc20431cd793abd90f3cc50aabc4d86eec4683d",
	                    "EndpointID": "3980fa0a05a9a8d5f7fe5f6dd0a25ae6c4223393fe268c9f33f049a8e5570a4b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-012915",
	                        "563755a7f659"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
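The inspect output above shows the node container healthy and the apiserver port 8441/tcp published to 127.0.0.1:32781, so the connection refusals are a process failure inside the node rather than a host networking problem. As a sanity check (not part of the test suite; the curl flags are illustrative), the refusal should be reproducible from the host on both paths:

	curl -k https://192.168.49.2:8441/version    # container IP used by the kubeconfig
	curl -k https://127.0.0.1:32781/version      # published port from the inspect output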
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p functional-012915 -n functional-012915: exit status 2 (291.936973ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs -n 25
helpers_test.go:260: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config unset cpus                                                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ config  │ functional-012915 config get cpus                                                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service list -o json                                                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh echo hello                                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ cp      │ functional-012915 cp functional-012915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd418601657/001/cp-test.txt │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service --namespace=default --https --url hello-node                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh cat /etc/hostname                                                                                   │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /home/docker/cp-test.txt                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ service │ functional-012915 service hello-node --url --format={{.IP}}                                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ service │ functional-012915 service hello-node --url                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ cp      │ functional-012915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ tunnel  │ functional-012915 tunnel --alsologtostderr                                                                                │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh -n functional-012915 sudo cat /tmp/does/not/exist/cp-test.txt                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ addons  │ functional-012915 addons list                                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ addons  │ functional-012915 addons list -o json                                                                                     │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ start   │ -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ start   │ -p functional-012915 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ start   │ -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ license │                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh     │ functional-012915 ssh sudo systemctl is-active docker                                                                     │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh     │ functional-012915 ssh sudo systemctl is-active containerd                                                                 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:07:06
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:07:06.995028  127793 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.995116  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995122  127793 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.995128  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995487  127793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.995965  127793 out.go:368] Setting JSON to false
	I1002 21:07:06.996965  127793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.997080  127793 start.go:140] virtualization: kvm guest
	I1002 21:07:06.999028  127793 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:07.000503  127793 notify.go:220] Checking for updates...
	I1002 21:07:07.000539  127793 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:07.002031  127793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:07.003411  127793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:07.004900  127793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:07.006037  127793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:07.007128  127793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:07.008912  127793 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:07.009362  127793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:07.034759  127793 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:07.034869  127793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:07.097804  127793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:07.08803788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:07.097914  127793 docker.go:318] overlay module found
	I1002 21:07:07.101324  127793 out.go:179] * Using the docker driver based on existing profile
	I1002 21:07:07.102629  127793 start.go:304] selected driver: docker
	I1002 21:07:07.102647  127793 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:07.102753  127793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:07.104576  127793 out.go:203] 
	W1002 21:07:07.105751  127793 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:07:07.107027  127793 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232026593Z" level=info msg="createCtr: removing container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.232070563Z" level=info msg="createCtr: deleting container b28bd02bfbafe506bc770bf054febc7e12b50c57efb3b0059baa9489b9a0e394 from storage" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:03 functional-012915 crio[5820]: time="2025-10-02T21:07:03.236500722Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-functional-012915_kube-system_7482f03c4ea15852236655655d7fae39_0" id=8ceb986f-2d0d-472e-895d-d77cce14331e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.205112276Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c3befc47-9bfa-4aa8-b747-756b48e93e52 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.20511383Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=99fc77f4-7341-4f8a-86ae-a5ab5c34bdbb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.206049366Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=1d70f6ca-3bd0-4d46-a462-2ec9b9b7fee2 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.2061021Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=b9e2894e-85a8-4ffd-bfd3-a8416ae915a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.206957752Z" level=info msg="Creating container: kube-system/etcd-functional-012915/etcd" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.207109626Z" level=info msg="Creating container: kube-system/kube-scheduler-functional-012915/kube-scheduler" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.207244388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.207337058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.211942542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.212358715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.213452201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.213883706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.230425915Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.231723075Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.232093617Z" level=info msg="createCtr: deleting container ID fd73553fd5ed22fd4c80c395891619cf791de388dac55da965b8be6b9bbcc623 from idIndex" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.232130925Z" level=info msg="createCtr: removing container fd73553fd5ed22fd4c80c395891619cf791de388dac55da965b8be6b9bbcc623" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.232168364Z" level=info msg="createCtr: deleting container fd73553fd5ed22fd4c80c395891619cf791de388dac55da965b8be6b9bbcc623 from storage" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.233578391Z" level=info msg="createCtr: deleting container ID 79000daf05b23e4426439c97ab4f930fe9011266b2a82829818bf9e9421c526c from idIndex" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.23361662Z" level=info msg="createCtr: removing container 79000daf05b23e4426439c97ab4f930fe9011266b2a82829818bf9e9421c526c" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.233655042Z" level=info msg="createCtr: deleting container 79000daf05b23e4426439c97ab4f930fe9011266b2a82829818bf9e9421c526c from storage" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.235587907Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-functional-012915_kube-system_d8a261ecdc32dae77705c4d6c0276f2f_0" id=7c38f3b7-7d09-4743-872a-9b0543a3e017 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:07:08 functional-012915 crio[5820]: time="2025-10-02T21:07:08.235964526Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-functional-012915_kube-system_8a66ab49d7c80b396ab0e8b46c39b696_0" id=58a09d77-7b0f-4177-aa03-5e52dbafd5ce name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:07:08.572638   17058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:08.573192   17058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:08.574611   17058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:08.575134   17058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1002 21:07:08.576776   17058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:07:08 up  2:49,  0 user,  load average: 1.41, 0.35, 0.28
	Linux functional-012915 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > podSandboxID="a129e9a2f94a7f43841dcb70e9f797b91d229fda437bd3abc02ab094cc4c3749"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237038   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:03 functional-012915 kubelet[14964]:         container kube-apiserver start failed in pod kube-apiserver-functional-012915_kube-system(7482f03c4ea15852236655655d7fae39): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:03 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:03 functional-012915 kubelet[14964]: E1002 21:07:03.237078   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-functional-012915" podUID="7482f03c4ea15852236655655d7fae39"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.219572   14964 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"functional-012915\" not found"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.831442   14964 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-012915?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: I1002 21:07:05.988113   14964 kubelet_node_status.go:75] "Attempting to register node" node="functional-012915"
	Oct 02 21:07:05 functional-012915 kubelet[14964]: E1002 21:07:05.988444   14964 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8441/api/v1/nodes\": dial tcp 192.168.49.2:8441: connect: connection refused" node="functional-012915"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.204649   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.204687   14964 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"functional-012915\" not found" node="functional-012915"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.235881   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:08 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > podSandboxID="0a35d159a682c6cd7da21a9fb2e3efef99f6f6c3f06af6071bd80e1de599842e"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.236001   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:08 functional-012915 kubelet[14964]:         container etcd start failed in pod etcd-functional-012915_kube-system(d8a261ecdc32dae77705c4d6c0276f2f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.236034   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-functional-012915" podUID="d8a261ecdc32dae77705c4d6c0276f2f"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.236196   14964 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:07:08 functional-012915 kubelet[14964]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > podSandboxID="8fcd09580c94c358972341d218f18641fb01c2881f93974b0a738c79d068fdb3"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.236250   14964 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:07:08 functional-012915 kubelet[14964]:         container kube-scheduler start failed in pod kube-scheduler-functional-012915_kube-system(8a66ab49d7c80b396ab0e8b46c39b696): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:07:08 functional-012915 kubelet[14964]:  > logger="UnhandledError"
	Oct 02 21:07:08 functional-012915 kubelet[14964]: E1002 21:07:08.237406   14964 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-functional-012915" podUID="8a66ab49d7c80b396ab0e8b46c39b696"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-012915 -n functional-012915: exit status 2 (300.891437ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-012915" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (1.29s)
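Note that this is not a labeling bug: the CRI-O and kubelet logs above show every control-plane container (kube-apiserver, etcd, kube-scheduler) failing at creation with "cannot open sd-bus: No such file or directory". With docker info reporting CgroupDriver:systemd, that error is consistent with the OCI runtime's systemd cgroup manager being unable to reach a systemd bus inside the kicbase container. A sketch of a check from inside the node (the socket paths are an assumption about where the runtime connects, not something this log proves):

	out/minikube-linux-amd64 -p functional-012915 ssh -- 'systemctl is-system-running; ls -l /run/systemd/private /run/dbus/system_bus_socket'

Once the apiserver is reachable again, the label assertion itself reduces to:

	kubectl --context functional-012915 get node functional-012915 --show-labels | grep minikube.k8s.io/name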

TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-012915 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-012915 create deployment hello-node --image kicbase/echo-server: exit status 1 (62.688455ms)

** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-012915 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.06s)
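All of the remaining service and tunnel subtests fail the same way at their first apiserver round-trip. A cheap preflight that separates "apiserver down" from a genuine deployment bug is to hit the readiness endpoint before creating anything (plain kubectl, not part of the test):

	kubectl --context functional-012915 get --raw '/readyz?verbose'

While 8441 refuses connections this fails immediately with the same dial error as above.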

TestFunctional/parallel/ServiceCmd/List (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 service list: exit status 103 (295.947324ms)

-- stdout --
	* The control-plane node functional-012915 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-012915"

-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-amd64 -p functional-012915 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-012915 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-012915\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.30s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 service list -o json: exit status 103 (300.575176ms)

-- stdout --
	* The control-plane node functional-012915 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-012915"

-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-amd64 -p functional-012915 service list -o json": exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 service --namespace=default --https --url hello-node: exit status 103 (324.350964ms)

-- stdout --
	* The control-plane node functional-012915 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-012915"

-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-012915 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

TestFunctional/parallel/ServiceCmd/Format (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 service hello-node --url --format={{.IP}}: exit status 103 (336.266176ms)

-- stdout --
	* The control-plane node functional-012915 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-012915"

-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-012915 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-012915 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-012915\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1002 21:07:02.496549  125145 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:02.496938  125145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:02.496952  125145 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:02.496958  125145 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:02.497257  125145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:02.497681  125145 mustload.go:65] Loading cluster: functional-012915
I1002 21:07:02.498235  125145 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:02.498824  125145 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:02.528822  125145 host.go:66] Checking if "functional-012915" exists ...
I1002 21:07:02.529193  125145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 21:07:02.615890  125145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 21:07:02.603353772 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 21:07:02.616037  125145 api_server.go:166] Checking apiserver status ...
I1002 21:07:02.616097  125145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1002 21:07:02.616146  125145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:02.638579  125145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
W1002 21:07:02.754886  125145 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1002 21:07:02.756321  125145 out.go:179] * The control-plane node functional-012915 apiserver is not running: (state=Stopped)
I1002 21:07:02.757693  125145 out.go:179]   To start a cluster, run: "minikube start -p functional-012915"

stdout: * The control-plane node functional-012915 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-012915"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] ...
helpers_test.go:519: unable to terminate pid 125144: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.34s)
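The tunnel's preflight visible in the stderr above can be replayed by hand with the same two probes minikube ran, using the SSH port (32778), key path, and docker user recorded in the log (the StrictHostKeyChecking flag is added here for convenience):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-012915
	ssh -o StrictHostKeyChecking=no -p 32778 -i /home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa docker@127.0.0.1 'sudo pgrep -xnf kube-apiserver.*minikube.*'

pgrep exiting 1 (no matching process) is what minikube turns into the exit 103 advisory.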

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 service hello-node --url: exit status 103 (317.973222ms)

-- stdout --
	* The control-plane node functional-012915 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-012915"

-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-012915 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-012915 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-012915"
functional_test.go:1579: failed to parse "* The control-plane node functional-012915 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-012915\"": parse "* The control-plane node functional-012915 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-012915\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.32s)
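The parse failure at functional_test.go:1579 is exactly what net/url produces here: the command printed its two-line "apiserver is not running" advice instead of a URL, and the embedded newline is an ASCII control character, which url.Parse rejects. A short reproduction:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The minikube advice text, newline included, is handed to url.Parse verbatim.
	out := "* The control-plane node functional-012915 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-012915\""
	_, err := url.Parse(out)
	fmt.Println(err) // ...: net/url: invalid control character in URL
}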

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-012915 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-012915 apply -f testdata/testsvc.yaml: exit status 1 (63.458816ms)

** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-012915 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.07s)
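This failure and the ones that follow share one symptom: 192.168.49.2:8441 refuses TCP connections because the apiserver is down, so even kubectl's OpenAPI schema download for client-side validation fails. A quick reachability probe (endpoint taken from the error above) distinguishes "connection refused" from a firewall timeout:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" means the host is reachable but nothing is
	// listening on the port; a timeout would point at routing/firewalling.
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}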

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (99.28s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1002 21:07:02.839619   84100 retry.go:31] will retry after 1.978999445s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-012915 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-012915 get svc nginx-svc: exit status 1 (52.117364ms)

** stderr ** 
	E1002 21:08:42.111626  134767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:08:42.111983  134767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:08:42.113431  134767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:08:42.113833  134767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E1002 21:08:42.115213  134767 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-012915 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (99.28s)
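The retry loop never had a chance here: the tunnel published no IP, so the test requested the literal URL "http://", and Go's HTTP client rejects an empty host before any packet is sent. Reproducible directly:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// An http URL with an empty host fails inside the client itself:
	// http: no Host in request URL
	_, err := http.Get("http://")
	fmt.Println(err)
}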

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image load --daemon kicbase/echo-server:functional-012915 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-012915" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

TestFunctional/parallel/MountCmd/any-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdany-port1720149524/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759439228944589422" to /tmp/TestFunctionalparallelMountCmdany-port1720149524/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759439228944589422" to /tmp/TestFunctionalparallelMountCmdany-port1720149524/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759439228944589422" to /tmp/TestFunctionalparallelMountCmdany-port1720149524/001/test-1759439228944589422
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.380507ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1002 21:07:09.222347   84100 retry.go:31] will retry after 377.71735ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 21:07 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 21:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 21:07 test-1759439228944589422
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh cat /mount-9p/test-1759439228944589422
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-012915 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-012915 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (46.170074ms)

** stderr ** 
	E1002 21:07:10.456638  129478 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.49.2:8441/api?timeout=32s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	error: unable to recognize "testdata/busybox-mount-test.yaml": Get "https://192.168.49.2:8441/api?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-012915 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (265.17934ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=42241)
	total 2
	-rw-r--r-- 1 docker docker 24 Oct  2 21:07 created-by-test
	-rw-r--r-- 1 docker docker 24 Oct  2 21:07 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Oct  2 21:07 test-1759439228944589422
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-012915 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdany-port1720149524/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdany-port1720149524/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port1720149524/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:42241
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port1720149524/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdany-port1720149524/001:/mount-9p --alsologtostderr -v=1] stderr:
I1002 21:07:08.994068  128683 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:08.994251  128683 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:08.994274  128683 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:08.994280  128683 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:08.994646  128683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:08.995049  128683 mustload.go:65] Loading cluster: functional-012915
I1002 21:07:08.995615  128683 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:08.996218  128683 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:09.014704  128683 host.go:66] Checking if "functional-012915" exists ...
I1002 21:07:09.015037  128683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1002 21:07:09.075223  128683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-02 21:07:09.06363111 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I1002 21:07:09.075385  128683 cli_runner.go:164] Run: docker network inspect functional-012915 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:07:09.098752  128683 out.go:179] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port1720149524/001 into VM as /mount-9p ...
I1002 21:07:09.100109  128683 out.go:179]   - Mount type:   9p
I1002 21:07:09.101406  128683 out.go:179]   - User ID:      docker
I1002 21:07:09.102838  128683 out.go:179]   - Group ID:     docker
I1002 21:07:09.104246  128683 out.go:179]   - Version:      9p2000.L
I1002 21:07:09.105507  128683 out.go:179]   - Message Size: 262144
I1002 21:07:09.106750  128683 out.go:179]   - Options:      map[]
I1002 21:07:09.107889  128683 out.go:179]   - Bind Address: 192.168.49.1:42241
I1002 21:07:09.109160  128683 out.go:179] * Userspace file server: 
I1002 21:07:09.109305  128683 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 21:07:09.109398  128683 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:09.129415  128683 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:09.234206  128683 mount.go:180] unmount for /mount-9p ran successfully
I1002 21:07:09.234240  128683 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1002 21:07:09.243415  128683 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=42241,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1002 21:07:09.286621  128683 main.go:125] stdlog: ufs.go:141 connected
I1002 21:07:09.292508  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tversion tag 65535 msize 262144 version '9P2000.L'
I1002 21:07:09.292566  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rversion tag 65535 msize 262144 version '9P2000'
I1002 21:07:09.292819  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1002 21:07:09.292904  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rattach tag 0 aqid (20fa273 a6c01010 'd')
I1002 21:07:09.293230  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 0
I1002 21:07:09.293449  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa273 a6c01010 'd') m d775 at 0 mt 1759439228 l 4096 t 0 d 0 ext )
I1002 21:07:09.294891  128683 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/.mount-process: {Name:mke11dea114a74be69ba1d52ec908a584efc2278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 21:07:09.295107  128683 mount.go:105] mount successful: ""
I1002 21:07:09.297263  128683 out.go:179] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port1720149524/001 to /mount-9p
I1002 21:07:09.298792  128683 out.go:203] 
I1002 21:07:09.299956  128683 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1002 21:07:10.134603  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 0
I1002 21:07:10.134752  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa273 a6c01010 'd') m d775 at 0 mt 1759439228 l 4096 t 0 d 0 ext )
I1002 21:07:10.135103  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 1 
I1002 21:07:10.135160  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 
I1002 21:07:10.135367  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Topen tag 0 fid 1 mode 0
I1002 21:07:10.135440  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Ropen tag 0 qid (20fa273 a6c01010 'd') iounit 0
I1002 21:07:10.135591  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 0
I1002 21:07:10.135723  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa273 a6c01010 'd') m d775 at 0 mt 1759439228 l 4096 t 0 d 0 ext )
I1002 21:07:10.136031  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 0 count 262120
I1002 21:07:10.136291  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 258
I1002 21:07:10.136496  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 261862
I1002 21:07:10.136539  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.136697  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 262120
I1002 21:07:10.136726  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.136872  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'test-1759439228944589422' 
I1002 21:07:10.136921  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa276 a6c01010 '') 
I1002 21:07:10.137057  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.137193  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.137392  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.137508  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.137668  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.137726  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.137940  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 21:07:10.137995  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa275 a6c0100f '') 
I1002 21:07:10.138118  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.138211  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa275 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.138339  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.138415  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa275 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.138545  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.138584  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.138708  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 21:07:10.138749  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa274 a6c0100f '') 
I1002 21:07:10.138831  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.138930  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa274 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.139096  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.139185  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa274 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.139321  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.139350  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.139492  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 262120
I1002 21:07:10.139525  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.139677  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 1
I1002 21:07:10.139715  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.404280  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 1 0:'test-1759439228944589422' 
I1002 21:07:10.404356  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa276 a6c01010 '') 
I1002 21:07:10.404517  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 1
I1002 21:07:10.404691  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.404850  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 1 newfid 2 
I1002 21:07:10.404882  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 
I1002 21:07:10.404987  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Topen tag 0 fid 2 mode 0
I1002 21:07:10.405050  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Ropen tag 0 qid (20fa276 a6c01010 '') iounit 0
I1002 21:07:10.405136  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 1
I1002 21:07:10.405223  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.405438  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 2 offset 0 count 24
I1002 21:07:10.405493  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 24
I1002 21:07:10.405620  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.405651  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.405768  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 1
I1002 21:07:10.405797  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.715065  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 0
I1002 21:07:10.715214  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa273 a6c01010 'd') m d775 at 0 mt 1759439228 l 4096 t 0 d 0 ext )
I1002 21:07:10.715620  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 1 
I1002 21:07:10.715698  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 
I1002 21:07:10.715873  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Topen tag 0 fid 1 mode 0
I1002 21:07:10.715948  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Ropen tag 0 qid (20fa273 a6c01010 'd') iounit 0
I1002 21:07:10.716074  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 0
I1002 21:07:10.716184  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa273 a6c01010 'd') m d775 at 0 mt 1759439228 l 4096 t 0 d 0 ext )
I1002 21:07:10.716442  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 0 count 262120
I1002 21:07:10.716604  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 258
I1002 21:07:10.716772  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 261862
I1002 21:07:10.716809  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.716940  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 262120
I1002 21:07:10.716975  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.717097  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'test-1759439228944589422' 
I1002 21:07:10.717145  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa276 a6c01010 '') 
I1002 21:07:10.717246  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.717358  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.717481  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.717575  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('test-1759439228944589422' 'jenkins' 'balintp' '' q (20fa276 a6c01010 '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.717690  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.717717  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.717837  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1002 21:07:10.717894  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa275 a6c0100f '') 
I1002 21:07:10.717996  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.718082  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa275 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.718199  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.718281  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa275 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.718388  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.718414  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.718525  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1002 21:07:10.718562  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rwalk tag 0 (20fa274 a6c0100f '') 
I1002 21:07:10.718658  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.718774  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa274 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.718907  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tstat tag 0 fid 2
I1002 21:07:10.718993  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa274 a6c0100f '') m 644 at 0 mt 1759439228 l 24 t 0 d 0 ext )
I1002 21:07:10.719107  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 2
I1002 21:07:10.719132  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.719253  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tread tag 0 fid 1 offset 258 count 262120
I1002 21:07:10.719297  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rread tag 0 count 0
I1002 21:07:10.719424  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 1
I1002 21:07:10.719468  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.720563  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1002 21:07:10.720619  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rerror tag 0 ename 'file not found' ecode 0
I1002 21:07:10.991290  128683 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:53932 Tclunk tag 0 fid 0
I1002 21:07:10.991349  128683 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:53932 Rclunk tag 0
I1002 21:07:10.991717  128683 main.go:125] stdlog: ufs.go:147 disconnected
I1002 21:07:11.007064  128683 out.go:179] * Unmounting /mount-9p ...
I1002 21:07:11.008214  128683 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1002 21:07:11.015748  128683 mount.go:180] unmount for /mount-9p ran successfully
I1002 21:07:11.015893  128683 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/.mount-process: {Name:mke11dea114a74be69ba1d52ec908a584efc2278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1002 21:07:11.017458  128683 out.go:203] 
W1002 21:07:11.018649  128683 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1002 21:07:11.019602  128683 out.go:203] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.16s)
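Note that the mount itself worked: the first findmnt probe raced the mount and the retry at 21:07:09 succeeded; it was the busybox pod step that failed, once again because the apiserver was unreachable. The probe-with-retry pattern the test uses amounts to roughly this (a hypothetical helper, assuming a host with findmnt on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitFor9pMount polls `findmnt -T target` until the output mentions 9p,
// mirroring the test's "findmnt -T /mount-9p | grep 9p" plus retry.
func waitFor9pMount(target string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("findmnt", "-T", target).Output()
		if err == nil && strings.Contains(string(out), "9p") {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s did not appear as a 9p mount", target)
}

func main() {
	fmt.Println(waitFor9pMount("/mount-9p", 5, 400*time.Millisecond))
}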

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image load --daemon kicbase/echo-server:functional-012915 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls
I1002 21:07:10.113187   84100 retry.go:31] will retry after 4.575392312s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:461: expected "kicbase/echo-server:functional-012915" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-012915
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image load --daemon kicbase/echo-server:functional-012915 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-012915" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image save kicbase/echo-server:functional-012915 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
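The assertion at functional_test.go:401 is a plain existence check on the tarball, and the next two failures (ImageLoadFromFile, ImageSaveDaemon) cascade from this file never being written. The check amounts to this (path copied from the log above):

package main

import (
	"fmt"
	"os"
)

func main() {
	p := "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("image save produced no tarball:", err) // what this run hit
		return
	}
	fmt.Println("tarball exists:", p)
}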

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1002 21:07:12.449013  130506 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:12.449173  130506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:12.449185  130506 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:12.449191  130506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:12.449537  130506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:12.450401  130506 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:12.450588  130506 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:12.451142  130506 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
	I1002 21:07:12.472068  130506 ssh_runner.go:195] Run: systemctl --version
	I1002 21:07:12.472128  130506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
	I1002 21:07:12.493196  130506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
	I1002 21:07:12.595241  130506 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1002 21:07:12.595312  130506 cache_images.go:254] Failed to load cached images for "functional-012915": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1002 21:07:12.595337  130506 cache_images.go:266] failed pushing to: functional-012915

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-012915
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image save --daemon kicbase/echo-server:functional-012915 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-012915
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-012915: exit status 1 (22.197501ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-012915

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-012915

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

TestMultiControlPlane/serial/StartCluster (502.39s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 21:12:02.783356   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:02.789829   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:02.801253   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:02.822699   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:02.864123   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:02.945599   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:03.107285   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:03.429002   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:04.071060   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:05.352696   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:07.915565   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:13.037188   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:23.278903   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:12:43.761012   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:13:24.723482   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:14:46.648247   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:17:02.783045   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:17:30.490038   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
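The cert_rotation spam above is client-go retrying a client.crt that no longer exists (the functional-012915 profile had been torn down), and the timestamps show its retry spacing roughly doubling from about 6 ms up into the minutes. An illustrative sketch of that kind of capped exponential backoff schedule (not client-go's actual code; the cap value is an assumption):

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 6 * time.Millisecond // first gap visible in the timestamps
	maxDelay := 5 * time.Minute   // assumed cap; not observed in this log window
	for i := 1; i <= 17; i++ {
		fmt.Printf("retry %2d after %v\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}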
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (8m21.068172073s)

-- stdout --
	* [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
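The inspect-then-create flow above (inspect fails with "network ha-798711 not found", and only then is a bridge network created on the chosen free subnet) can be reproduced standalone. A minimal Go sketch, assuming a local docker CLI and reusing the subnet, gateway, and MTU option from the log; the variable names are illustrative:

	// Mirror the log's sequence: "docker network inspect" exits non-zero for a
	// missing network, and only then is "docker network create" run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name, subnet, gateway := "ha-798711", "192.168.49.0/24", "192.168.49.1"
		if exec.Command("docker", "network", "inspect", name).Run() == nil {
			fmt.Println("network already exists")
			return
		}
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
		if err != nil {
			fmt.Printf("create failed: %v\n%s", err, out)
			return
		}
		fmt.Println("created network", name)
	}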
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
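The preload step above unpacks an lz4 tarball of container images into the named volume via a throwaway container, and reports a "duration metric" for it. A sketch of that timing pattern, with a placeholder command standing in for the tar extraction:

	// Time an external command the way the "duration metric: took ..." lines
	// are produced: record time.Now before, report time.Since after.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Stand-in for: docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
		if err := exec.Command("sleep", "1").Run(); err != nil {
			fmt.Println("command failed:", err)
			return
		}
		fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
	}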
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
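provisionDockerMachine drives all of the above over SSH to the forwarded port (127.0.0.1:32783 in this run) as the "docker" user. A minimal sketch of such a client with golang.org/x/crypto/ssh, assuming the key path and port from the log; InsecureIgnoreHostKey is tolerable only for throwaway test machines like these:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32783", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test container only
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, _ := session.CombinedOutput("hostname") // same first command as the log
		fmt.Printf("%s", out)                        // expect: ha-798711
	}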
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
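configureAuth generates a server certificate whose SANs cover every name the machine may be reached by ([127.0.0.1 192.168.49.2 ha-798711 localhost minikube] above) and copies it to /etc/docker. One way to double-check the SANs on the copied cert, sketched as an exec of the stock openssl CLI (the path is the remote one from the log, so this would run inside the container):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("openssl", "x509",
			"-in", "/etc/docker/server.pem", "-noout", "-text").CombinedOutput()
		if err != nil {
			fmt.Printf("openssl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out) // look for "X509v3 Subject Alternative Name"
	}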
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
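The "systemd" cgroup driver detected here is carried through to both CRI-O and the kubelet below; a mismatch between the two is a classic cause of pods failing to start. A rough detection sketch (one common heuristic, not necessarily minikube's exact logic): cgroup v2 exposes a unified hierarchy, under which systemd is the usual driver choice.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On cgroup v2 hosts the unified hierarchy exposes cgroup.controllers.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 detected; systemd driver is the usual choice")
		} else {
			fmt.Println("cgroup v1 (legacy) hierarchy")
		}
	}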
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
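Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart (reconstructed from the commands, not captured from the container):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]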
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
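Because the ip_vs modules were unavailable (log above), kube-vip falls back to ARP mode (vip_arp=true): the elected leader answers ARP for 192.168.49.254 and binds it on eth0 instead of doing IPVS load-balancing. Once a control plane is up, the VIP's reachability can be probed with a plain TCP dial; a sketch, with the address and port taken from the manifest:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable yet:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP is accepting connections")
	}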
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
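The 2205-byte file written here is the three-document kubeadm config rendered above, staged as kubeadm.yaml.new and promoted to kubeadm.yaml before init. A short Go sketch, assuming gopkg.in/yaml.v3, that walks the YAML documents and confirms the kubelet's cgroupDriver matches the "systemd" driver detected earlier:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f) // iterates the "---"-separated documents
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", doc["cgroupDriver"]) // expect: systemd
			}
		}
	}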
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
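The apiserver cert generated above carries SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (where the in-cluster "kubernetes" Service points), 192.168.49.2 is the node IP, and 192.168.49.254 is the HA VIP. A one-line sketch of where 10.96.0.1 comes from:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		_, cidr, _ := net.ParseCIDR("10.96.0.0/12")
		ip := cidr.IP.To4()
		ip[3]++         // first usable host address in the range
		fmt.Println(ip) // 10.96.0.1
	}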
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
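The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention, which is how TLS clients locate CA certificates under /etc/ssl/certs. A minimal sketch of the same install step, with an illustrative certificate path:

    # compute the subject hash and link the CA under it (path is illustrative)
    cert=/usr/share/ca-certificates/example.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # e.g. 3ec20f2e
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"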
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
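The four grep-then-rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. The same sweep as a standalone loop:

    endpoint="https://control-plane.minikube.internal:8443"
    for name in admin kubelet controller-manager scheduler; do
      conf="/etc/kubernetes/${name}.conf"
      # keep a config only if it already points at the expected endpoint
      sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
    done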
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
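The three health endpoints named in the control-plane-check lines can be probed directly from the node to see which component fails first; a sketch, assuming curl and crictl are available inside the container:

    # the serving certs are self-signed, hence -k
    curl -ks https://192.168.49.2:8443/livez       # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz       # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez         # kube-scheduler
    # list any control-plane containers the runtime actually started
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause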
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
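Between attempts, minikube tears down the failed control plane and checks that the kubelet is no longer active before re-running init; the manual equivalent is roughly:

    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo systemctl is-active --quiet kubelet || echo "kubelet stopped"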
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
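The log collection above can be reproduced by hand on the node with the same commands minikube runs:

    sudo journalctl -u kubelet -n 400      # kubelet tail
    sudo journalctl -u crio -n 400         # CRI-O tail
    sudo crictl ps -a                      # container status
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400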
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 

                                                
                                                
** /stderr **
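The kubeadm output above already narrows the failure: the kubelet came up healthy in ~1.5s, but all three control-plane components stayed unreachable for the full 4m0s window, which points at the static-pod containers rather than kubeadm itself. A minimal triage sketch along the lines the error message suggests, assuming the ha-798711 node container is still running and using the CRI-O socket path quoted above (CONTAINERID is a placeholder for whatever the first command reveals):

	# list the kube-* containers inside the minikube node (command taken from the kubeadm hint)
	minikube -p ha-798711 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# then read the failing container's logs
	minikube -p ha-798711 ssh -- "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID"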
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
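The inspect output confirms the Docker layer is fine: the container reports State.Status "running", the expected port bindings are present, and the node holds its static IP 192.168.49.2 on the ha-798711 network, so the failure sits inside the guest. Individual fields can be pulled without reading the whole JSON; a few hypothetical Go-template one-liners against the same container, with field paths taken from the dump above:

	# overall container state
	docker inspect -f '{{ .State.Status }}' ha-798711
	# host port forwarded to the node's SSH port (22/tcp)
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' ha-798711
	# the node's IP on the cluster network
	docker inspect -f '{{ (index .NetworkSettings.Networks "ha-798711").IPAddress }}' ha-798711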
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (297.519284ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:19:28.382608  142112 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
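The exit-6 here is a follow-on effect of the aborted start: the host container is "Running", but the ha-798711 entry was never written to the kubeconfig, so the status check cannot resolve an endpoint. A minimal recovery sketch, assuming the profile still exists, built from the command the warning itself recommends:

	# rewrite the kubeconfig entry for this profile
	out/minikube-linux-amd64 update-context -p ha-798711
	# confirm the ha-798711 context now exists
	kubectl config get-contexts
	# re-run the failed status check
	out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711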
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1 │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/84100.pem                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /usr/share/ca-certificates/84100.pem                                               │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/51391683.0                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount1                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/841002.pem                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount2                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /usr/share/ca-certificates/841002.pem                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh findmnt -T /mount3                                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ mount          │ -p functional-012915 --kill=true                                                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-012915 --alsologtostderr -v=1                                                    │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ ssh            │ functional-012915 ssh sudo cat /etc/test/nested/copy/84100/hosts                                                  │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format short --alsologtostderr                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format json --alsologtostderr                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format table --alsologtostderr                                                       │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls --format yaml --alsologtostderr                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ ssh            │ functional-012915 ssh pgrep buildkitd                                                                             │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                        │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                              │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio   │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
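The lines above are minikube's KIC network bootstrap: inspect the profile network (absent on first start), probe for a free private subnet starting at 192.168.49.0/24, create a labeled bridge network with a fixed MTU, and derive the node's static IP as the first client address after the gateway. A minimal by-hand sketch of the same steps (profile name taken from the log; illustrative only):

    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
    # first client address after the gateway becomes the node IP (192.168.49.2)
    docker network inspect ha-798711 \
      --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'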
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
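Instead of pulling images inside the node, minikube seeds the node's /var volume from an lz4-compressed preload tarball, using a throwaway container purely as an extraction context. The pattern, with shell variables standing in for the long host paths in the log (illustrative):

    # mount the tarball read-only, mount the named volume as the target,
    # and untar directly into the volume
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
      -v ha-798711:/extractDir \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir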
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
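Note the port mappings in the docker run above: --publish=127.0.0.1::8443 (empty host-port field) asks Docker for an ephemeral loopback port, so concurrent profiles never collide on SSH, the API server, or the Docker socket. The assigned ports are recovered afterwards by inspecting the container, exactly as the log does for 22/tcp:

    docker container inspect ha-798711 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints 32783 in this run; all later SSH provisioning targets 127.0.0.1:32783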
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
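configureAuth builds a libmachine-style server certificate signed by the local CA, with SANs covering every name the Docker endpoint may be reached by (the san=[...] list above), then ships ca.pem, server.pem, and server-key.pem into /etc/docker on the node. A rough openssl equivalent, assuming the key files already exist (the exact extensions minikube sets may differ):

    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-798711" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-798711,DNS:localhost,DNS:minikube')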
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
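The CRIO_MINIKUBE_OPTIONS file written a few lines above whitelists the entire service CIDR (10.96.0.0/12) as an insecure registry, so cluster-internal registries served over plain HTTP (for example the registry addon's ClusterIP) are usable without TLS. It can be verified on the node afterwards (illustrative):

    docker exec ha-798711 cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '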
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
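Taken together, the sed edits above align CRI-O with the detected host before the restart: systemd as cgroup manager (matching the "systemd" cgroup driver detected earlier), conmon in the pod cgroup, the pinned pause image, and unprivileged low ports for pods. Roughly the resulting drop-in, with TOML section headers added here for orientation (the real file layout may differ):

    # /etc/crio/crio.conf.d/02-crio.conf (approximate)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]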
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
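The one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the temp file back with sudo in one step. The same pattern as a reusable function (illustrative):

    update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    update_hosts_entry 192.168.49.1 host.minikube.internal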
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
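The kubelet unit above relies on the standard systemd override idiom: the bare ExecStart= clears whatever the base kubelet.service defines, and the following ExecStart= supplies minikube's own command line, pinning --node-ip to the node's bridge address. Once the drop-in lands in /etc/systemd/system/kubelet.service.d/ (the scp of 10-kubeadm.conf below), the merged unit can be checked with (illustrative):

    sudo systemctl daemon-reload
    systemctl cat kubelet   # base unit followed by the 10-kubeadm.conf drop-in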
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
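The rendered file stitches four kubeadm API objects together: InitConfiguration (node-local socket, IP, taints), ClusterConfiguration (with controlPlaneEndpoint pointing at the control-plane.minikube.internal name that later resolves to the HA VIP), KubeletConfiguration, and KubeProxyConfiguration. A config like this can be exercised without touching the node (illustrative):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run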
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
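Since lsmod found no ip_vs modules, kube-vip falls back to ARP mode rather than IPVS load-balancing: the leader elected via the plndr-cp-lock lease (5s duration, 3s renew deadline, 1s retry, per the env vars above) attaches 192.168.49.254/32 to eth0 and answers ARP for it, so the control-plane VIP follows whichever node holds the lease. After bring-up this can be observed with (illustrative):

    kubectl -n kube-system get lease plndr-cp-lock
    docker exec ha-798711 ip addr show dev eth0   # the leader carries 192.168.49.254/32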
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
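Note the SAN set chosen for the API server certificate: the in-cluster service VIP 10.96.0.1, loopback, the node IP 192.168.49.2, and the kube-vip address 192.168.49.254, so one certificate stays valid for every path a client can take to the API server. The SANs can be read back from the written cert (MINIKUBE_HOME standing in for the .minikube directory in the log; illustrative):

    openssl x509 -noout -text \
      -in "$MINIKUBE_HOME/profiles/ha-798711/apiserver.crt" | grep -A1 'Subject Alternative Name'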
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
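The hash-and-link sequence above follows OpenSSL's c_rehash convention: trust lookup in /etc/ssl/certs is by <subject-hash>.0, where the hash is what openssl x509 -hash prints (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the two jenkins certs). Reduced to a helper (illustrative; the log links via an intermediate /etc/ssl/certs copy):

    install_ca() {  # usage: install_ca /usr/share/ca-certificates/foo.pem
      local h; h=$(openssl x509 -hash -noout -in "$1")
      sudo ln -fs "$1" "/etc/ssl/certs/${h}.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem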
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
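Note on the failure pattern above: the kubelet itself reports healthy within a second, yet all three control-plane endpoints fail with connection refused or a timeout, which points at the static-pod containers never being created rather than at networking or certificates. A minimal sketch for checking this by hand from inside the node (e.g. via `minikube ssh`); the endpoints are the ones kubeadm's control-plane-check polls above, but the commands themselves are illustrative and not taken from this report:

    # Probe the same endpoints kubeadm's control-plane-check uses.
    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler
    curl -k https://192.168.49.2:8443/livez   # kube-apiserver
    # Connection refused on all three, combined with an empty
    # 'sudo crictl ps -a', means the runtime never started the pods.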
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:19:19 ha-798711 crio[783]: time="2025-10-02T21:19:19.227979761Z" level=info msg="createCtr: removing container d9b15c197e578086372ffc6923eb3592c53a42e988e65c85c10a6356550df496" id=fb2c3e27-553e-4626-8677-2989c2ef750a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:19 ha-798711 crio[783]: time="2025-10-02T21:19:19.228010125Z" level=info msg="createCtr: deleting container d9b15c197e578086372ffc6923eb3592c53a42e988e65c85c10a6356550df496 from storage" id=fb2c3e27-553e-4626-8677-2989c2ef750a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:19 ha-798711 crio[783]: time="2025-10-02T21:19:19.229839826Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=fb2c3e27-553e-4626-8677-2989c2ef750a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.200911632Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=31629b50-0301-44c8-9fec-78f94a416ade name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.201797828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c4905a18-776b-4825-b31d-20d63fea603e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.202635107Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.202898569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.206306267Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.20676577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.223007958Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.224422119Z" level=info msg="createCtr: deleting container ID aee1e152d6be8250197de036a3bfe0de9bca772160605d13c83121f67d481316 from idIndex" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.224462375Z" level=info msg="createCtr: removing container aee1e152d6be8250197de036a3bfe0de9bca772160605d13c83121f67d481316" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.224503791Z" level=info msg="createCtr: deleting container aee1e152d6be8250197de036a3bfe0de9bca772160605d13c83121f67d481316 from storage" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:20 ha-798711 crio[783]: time="2025-10-02T21:19:20.226676465Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=99fde19d-1a35-4bb4-809a-d9bba3645642 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.200891384Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=1caaf417-cf73-4777-b326-790e3fc96bdc name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.201797376Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=2595d89c-4d29-4cf1-804f-f0535491b7a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.202666949Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.202902797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.206265047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.206681003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.219842622Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.221241886Z" level=info msg="createCtr: deleting container ID 2ad648d3c94aa8d86df1643dd9374a05d6abf56d3dd7763a118344745dd6dfa9 from idIndex" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.221278479Z" level=info msg="createCtr: removing container 2ad648d3c94aa8d86df1643dd9374a05d6abf56d3dd7763a118344745dd6dfa9" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.221316992Z" level=info msg="createCtr: deleting container 2ad648d3c94aa8d86df1643dd9374a05d6abf56d3dd7763a118344745dd6dfa9 from storage" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:19:23 ha-798711 crio[783]: time="2025-10-02T21:19:23.223233594Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=a8dc4047-92a2-40f4-9a17-d2529e68e324 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:28.957105    2721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:28.957683    2721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:28.959176    2721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:28.959617    2721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:28.961165    2721 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:19:28 up  3:01,  0 user,  load average: 0.02, 0.08, 0.15
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:19:19 ha-798711 kubelet[1962]: E1002 21:19:19.230215    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:19:19 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:19:19 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:19:19 ha-798711 kubelet[1962]: E1002 21:19:19.230245    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:19:20 ha-798711 kubelet[1962]: E1002 21:19:20.200461    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:19:20 ha-798711 kubelet[1962]: E1002 21:19:20.227000    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:19:20 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:19:20 ha-798711 kubelet[1962]:  > podSandboxID="809957a7718c537a272955808ab83d0d209917c15901f264880b1842ca38ceb3"
	Oct 02 21:19:20 ha-798711 kubelet[1962]: E1002 21:19:20.227107    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:19:20 ha-798711 kubelet[1962]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:19:20 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:19:20 ha-798711 kubelet[1962]: E1002 21:19:20.227141    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.200464    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.223554    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:19:23 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:19:23 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.223651    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:19:23 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:19:23 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.223685    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.499409    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c270fef5b  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-798711 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.193223003 +0000 UTC m=+1.090766327,LastTimestamp:2025-10-02 21:15:27.193223003 +0000 UTC m=+1.090766327,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.824368    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: I1002 21:19:23.983822    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:19:23 ha-798711 kubelet[1962]: E1002 21:19:23.984243    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:19:27 ha-798711 kubelet[1962]: E1002 21:19:27.214666    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	

                                                
                                                
-- /stdout --
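Note: the repeated CreateContainerError above ("container create failed: cannot open sd-bus: No such file or directory") is the OCI runtime's systemd cgroup driver failing to reach a systemd bus from inside the node container; every static pod (kube-apiserver, kube-controller-manager, kube-scheduler) fails the same way, which is why the container status table above is empty and the apiserver never comes up. A minimal probe for the sockets such a driver conventionally needs (the paths below are standard systemd/D-Bus defaults assumed for illustration, not read from any crio or runc config):

	// sdbus_probe.go: check for the sockets a systemd cgroup driver
	// typically talks to. Paths are conventional defaults (assumption),
	// not taken from this cluster's configuration.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		paths := []string{
			"/run/systemd/private",            // systemd private bus endpoint
			"/var/run/dbus/system_bus_socket", // D-Bus system bus socket
		}
		for _, p := range paths {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: missing (%v)\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}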
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (302.261781ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:19:29.342392  142436 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (502.39s)
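Note: the --format={{.APIServer}} and --format={{.Host}} arguments used by the harness are Go text/template strings rendered against minikube's status struct, which is why a bare field name is a valid format. A minimal sketch of the mechanism (the Status type here is illustrative, modeling only the Host and APIServer fields queried in this report, not minikube's real type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's status struct; only the two
	// fields exercised by the --format queries above are modeled.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// Equivalent of `minikube status --format={{.APIServer}}`.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"}) // prints "Stopped"
	}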

                                                
                                    
TestMultiControlPlane/serial/DeployApp (114.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (93.438066ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-798711" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- rollout status deployment/busybox: exit status 1 (88.68463ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (89.444386ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:29.629200   84100 retry.go:31] will retry after 727.4093ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (89.880236ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:30.447146   84100 retry.go:31] will retry after 1.296756284s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (90.023073ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:31.834804   84100 retry.go:31] will retry after 1.155152835s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (94.002393ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:33.084206   84100 retry.go:31] will retry after 4.605348232s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.476787ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:37.785619   84100 retry.go:31] will retry after 5.519744799s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.96425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:43.397653   84100 retry.go:31] will retry after 9.606121882s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (91.692063ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:19:53.096261   84100 retry.go:31] will retry after 16.421507432s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.353123ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:20:09.615438   84100 retry.go:31] will retry after 24.343855212s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (90.965227ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:20:34.051638   84100 retry.go:31] will retry after 28.708395851s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.932308ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1002 21:21:02.863171   84100 retry.go:31] will retry after 19.461680075s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (92.02746ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
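Note: the retry.go:31 lines above show the harness retrying with roughly doubling, jittered delays (727ms, 1.3s, 4.6s, 5.5s, 9.6s, ...) until its budget is exhausted. A minimal sketch of that retry-with-backoff pattern (illustrative only, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn up to attempts times, sleeping a jittered,
	// roughly doubling delay between failures, mirroring the delay
	// progression logged above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i == attempts-1 {
				break
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return err
	}

	func main() {
		_ = retryWithBackoff(5, 500*time.Millisecond, func() error {
			return errors.New("failed to retrieve Pod IPs (may be temporary)")
		})
	}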
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (90.195641ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.io: exit status 1 (93.564075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.default: exit status 1 (90.499381ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (91.953289ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-798711"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
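Note: the post-mortem shells out to `docker inspect`; the same container state is available programmatically from the Docker Engine API. A minimal sketch using the official Go SDK (assumes the github.com/docker/docker/client module; this is not how helpers_test.go collects the dump):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Same data as `docker inspect ha-798711`, decoded into structs.
		info, err := cli.ContainerInspect(context.Background(), "ha-798711")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("status:", info.State.Status) // "running" in the dump above
		if net, ok := info.NetworkSettings.Networks["ha-798711"]; ok {
			fmt.Println("ip:", net.IPAddress) // 192.168.49.2 in the dump above
		}
	}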
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (291.637082ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:21:23.087851  143518 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-012915 ssh pgrep buildkitd                                                                           │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │                     │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
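Note: the inspect call above failed simply because the ha-798711 network did not exist yet; the --format argument itself is ordinary Go text/template syntax rendered against the network object. A small sketch of evaluating a similar template against a stand-in struct (toy type, not docker's internal model):

    package main

    import (
        "os"
        "text/template"
    )

    // network stands in for the fields the minikube format string reads.
    type network struct {
        Name   string
        Driver string
    }

    func main() {
        const format = `{"Name": "{{.Name}}","Driver": "{{.Driver}}"}`
        tmpl := template.Must(template.New("inspect").Parse(format))
        // Prints: {"Name": "ha-798711","Driver": "bridge"}
        if err := tmpl.Execute(os.Stdout, network{Name: "ha-798711", Driver: "bridge"}); err != nil {
            panic(err)
        }
    }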
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
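Note: the "calculated static IP" follows directly from the free subnet picked a few lines earlier: the gateway takes the first host address (.1) and the first node the next one (.2). A sketch of that derivation with net/netip; the convention is read off this log, not taken from minikube source.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("192.168.49.0/24")
        gateway := prefix.Addr().Next() // 192.168.49.1, the gateway in the log
        firstNode := gateway.Next()     // 192.168.49.2, the calculated static IP
        fmt.Println(gateway, firstNode)
    }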
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
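Note: the server cert generated above carries both IP and DNS SANs (san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]), so one certificate serves both port-forwarded and in-network access. A self-contained crypto/x509 sketch of issuing such a certificate; the throwaway CA and field choices here are illustrative, not minikube's exact parameters.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA so the sketch runs standalone; minikube reuses ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-798711"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:     []string{"ha-798711", "localhost", "minikube"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Println("server cert DER bytes:", len(der))
    }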
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
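Note: the detected "systemd" cgroup driver decides how cri-o is configured below. A common heuristic for that detection is checking for the unified cgroup-v2 hierarchy; the sketch below shows only that heuristic and may not match minikube's actual logic.

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // cgroup.controllers exists only on cgroup-v2 hosts, where systemd is
        // the conventional cgroup manager. Heuristic for illustration only.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("systemd")
        } else {
            fmt.Println("cgroupfs")
        }
    }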
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
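Note: the run of sed commands above patches /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl). For readers less fluent in sed, here are the first two rewrites expressed as a Go sketch, operating on a hypothetical local copy of the drop-in:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "02-crio.conf" // hypothetical local copy of the drop-in
        conf, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // sed 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, conf, 0o644); err != nil {
            panic(err)
        }
    }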
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
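Note: the kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new. A sketch that enumerates the documents in such a stream, assuming gopkg.in/yaml.v3 and a hypothetical local copy of the file:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            // Prints e.g. kubeadm.k8s.io/v1beta4 / InitConfiguration
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }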
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
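Note: the vip_leaseduration/vip_renewdeadline/vip_retryperiod values in the kube-vip manifest above (5s/3s/1s on the plndr-cp-lock lease in kube-system) are ordinary Kubernetes leader-election parameters. For comparison, roughly equivalent settings expressed directly through client-go; this is a sketch assuming an in-cluster client, not kube-vip's code.

    package main

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-798711"},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 5 * time.Second, // vip_leaseduration
            RenewDeadline: 3 * time.Second, // vip_renewdeadline
            RetryPeriod:   1 * time.Second, // vip_retryperiod
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* claim the VIP here */ },
                OnStoppedLeading: func() { /* release the VIP here */ },
            },
        })
    }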
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
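Following the troubleshooting hint embedded in the kubeadm output above, a minimal triage session might look like the sketch below. Only the node container name ha-798711 and the crictl invocations are taken from this log; CONTAINERID is a placeholder, and running from a host with the docker CLI available is an assumption:

	# enter the minikube node container (docker driver)
	docker exec -it ha-798711 bash
	# list every Kubernetes container, including crashed/exited ones
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# inspect the logs of whichever container keeps failing (CONTAINERID is a placeholder)
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID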
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
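The stale-config cleanup minikube just repeated (kubeadm.go:163) amounts to a grep-and-remove pass over four kubeconfig files. A minimal sketch of the same logic, built only from the commands visible in this log, run inside the node:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep a kubeconfig only if it already points at the expected control-plane endpoint
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done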
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
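All three health endpoints named in the error can be probed by hand from inside the node; a minimal sketch, with the URLs copied from the control-plane-check lines above (-sk because the components serve self-signed certificates):

	# kube-apiserver, kube-controller-manager, kube-scheduler health endpoints
	curl -sk https://192.168.49.2:8443/livez
	curl -sk https://127.0.0.1:10257/healthz
	curl -sk https://127.0.0.1:10259/livez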
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
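The diagnostics minikube gathers here can also be pulled manually on the node, using the exact commands shown in the four Run: lines above, should the bundled collection be unavailable:

	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400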
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224853914Z" level=info msg="createCtr: removing container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224896558Z" level=info msg="createCtr: deleting container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71 from storage" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.227165671Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.202267878Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4078c428-1413-4c71-9631-402893c5a2dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.203230958Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b7d07987-8e24-40b0-aab0-1f5a40695194 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204195061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204394394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.207757566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.20814543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.225908525Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227342354Z" level=info msg="createCtr: deleting container ID bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from idIndex" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:23.668715    3095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:23.669287    3095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:23.670823    3095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:23.671282    3095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:23.672825    3095 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:23 up  3:03,  0 user,  load average: 0.05, 0.06, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:17 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:17 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:17 ha-798711 kubelet[1962]: E1002 21:21:17.227652    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.200617    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.229960    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230055    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230084    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.903291    1962 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
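
Every control-plane pod in the dump above dies the same way: CRI-O accepts the CreateContainer call and then the OCI runtime aborts with "container create failed: cannot open sd-bus: No such file or directory", so kube-apiserver, etcd, kube-scheduler and kube-controller-manager never come up and every later connection to 192.168.49.2:8443 is refused. That sd-bus error is what a runtime configured for the systemd cgroup manager prints when it cannot reach a systemd D-Bus socket inside the node. A minimal way to check this from the host, as a sketch only (profile name taken from this run; the socket paths are the usual systemd defaults, not something this log proves):

	# is there a live systemd bus inside the kic node?
	out/minikube-linux-amd64 -p ha-798711 ssh -- ls -l /run/dbus/system_bus_socket /run/systemd/private
	# which cgroup manager is CRI-O actually configured with?
	out/minikube-linux-amd64 -p ha-798711 ssh -- sudo crio config | grep cgroup_manager
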
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (296.785579ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:24.048050  143837 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (114.71s)
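
DeployApp never had a working apiserver to deploy against, so this failure and the TestMultiControlPlane/serial subtests that follow are cascading consequences of the same StartCluster breakage rather than independent regressions. To reproduce the group in isolation, the integration suite can be filtered with Go's -run flag; a sketch assuming the standard minikube repo layout and a prebuilt out/minikube-linux-amd64:

	# the serial subtests depend on StartCluster, so re-run the whole group
	go test ./test/integration -v -timeout 90m -run 'TestMultiControlPlane'
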

x
+
TestMultiControlPlane/serial/PingHostFromPods (1.35s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (90.447087ms)

** stderr ** 
	error: no server found for cluster "ha-798711"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
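
The error 'no server found for cluster "ha-798711"' is client-side: the kubeconfig the harness points at has no cluster entry named ha-798711 with a server URL (matching the status.go:458 "does not appear in .../kubeconfig" errors elsewhere in this report), so kubectl fails before attempting any network I/O. One way to see what that kubeconfig actually contains, as a sketch using the KUBECONFIG path copied from this run:

	KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig \
		kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
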
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
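
The inspect output confirms the Docker plumbing is intact: State.Running is true, the ha-798711 network assigned 192.168.49.2, and 8443/tcp is published to 127.0.0.1:32786, so the refused connections come from nothing listening inside the node rather than from the container or its port mappings. Individual fields can be pulled out with a Go template instead of scanning the full JSON; the template below mirrors the shape minikube itself logs later in this report for the 22/tcp port:

	docker inspect ha-798711 \
		-f '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
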
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (292.31997ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:24.450604  143983 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
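
Exit status 6 here tracks the kubeconfig lookup failure from status.go:458, not a dead container (the Host check above still prints Running). The warning in the stdout block names the repair step; for this profile it would be:

	out/minikube-linux-amd64 -p ha-798711 update-context
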
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
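The ~4.4s extraction above is a plain tar-into-volume pattern: the preload tarball is bind-mounted read-only into a throwaway container and unpacked into the named volume that later becomes the node's /var. A sketch under the same kicbase image; preloaded-images.tar.lz4 is a hypothetical stand-in for minikube's cached preload path:

	docker volume create demo-var
	# the tarball path below is hypothetical; minikube reads from its cache under ~/.minikube
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	  -v demo-var:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643 \
	  -I lz4 -xf /preloaded.tar -C /extractDir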
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
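The state probes here, and the SSH port lookup a few lines below, are all Go-template inspections of the same container; a sketch:

	docker container inspect ha-798711 --format '{{.State.Status}}'   # expect "running"
	# host port mapped to the guest's sshd (22/tcp), used by every later ssh_runner call
	docker container inspect ha-798711 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'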
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
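The sysconfig write above only matters if crio's systemd unit actually sources /etc/sysconfig/crio.minikube as an environment file, which this log implies but does not show; a sketch of verifying that assumption on the node:

	systemctl cat crio | grep -i EnvironmentFile
	cat /etc/sysconfig/crio.minikube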
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
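The find one-liner above parks pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving pod networking to the CNI minikube installs later. An equivalent, slightly more readable sketch:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;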
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
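After the sed edits and restart, the effective CRI-O settings can be spot-checked on the node; a minimal sketch:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version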
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
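The rendered file is ordinary kubeadm v1beta4 config plus kubelet and kube-proxy component configs, so it can be sanity-checked before (or instead of) a real init; a sketch, noting that 'kubeadm config validate' exists only in reasonably recent kubeadm releases:

	# schema-check the rendered file without touching the node
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or walk the full init logic with no side effects
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run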
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
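This manifest runs kube-vip in ARP leader-election mode (vip_arp=true) because the lsmod probe above found no ip_vs modules, so control-plane load-balancing was skipped. Checking, and where the kernel ships them, loading those modules looks like this; module availability inside a container is not guaranteed:

	lsmod | grep ip_vs || echo "ip_vs not loaded"
	sudo modprobe -a ip_vs ip_vs_rr 2>/dev/null || true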
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
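The hex link names used in this block (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: the file name is the certificate's subject hash and the .0 suffix is a collision index. Producing one of the links by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"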
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
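All three control-plane health checks above time out or are refused, which is what kubeadm reports after its 4m0s budget. Below is a minimal probe sketch in Go against the same three endpoints; the URLs are copied from the kubeadm output, and TLS verification is skipped only because the bootstrap apiserver serves a self-signed certificate. This is illustrative, not minikube's or kubeadm's code.

    // probe.go: poll the control-plane health endpoints from the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        endpoints := map[string]string{
            "kube-apiserver":          "https://192.168.49.2:8443/livez",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
        }
        client := &http.Client{
            Timeout:   10 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for name, url := range endpoints {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("%s: unhealthy: %v\n", name, err) // here: connection refused
                continue
            }
            resp.Body.Close()
            fmt.Printf("%s: HTTP %d\n", name, resp.StatusCode)
        }
    }

On this node all three probes would report "connection refused", matching the control-plane-check failures above.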
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
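The grep-then-rm sequence above is the stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint. A hedged sketch of that pass, with the paths and endpoint string taken from the log (the code is illustrative, not minikube's kubeadm.go):

    // cleanup.go: drop kubeconfigs that do not point at the expected endpoint.
    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                // Unreadable or wrong endpoint: treat as stale, as the log does.
                os.Remove(f)
                fmt.Printf("removed stale config %s\n", f)
            }
        }
    }

In this run the files were already gone after kubeadm reset, so every grep exits with status 2 and the rm calls are no-ops.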
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
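After the second init failure, minikube enumerates CRI containers for each control-plane and networking component and finds none. The loop below reproduces that enumeration with the same crictl invocation; the component list is copied from the queries above, and the sketch stands in for, rather than reproduces, the cri.go implementation:

    // listctr.go: count containers per component via crictl, as in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers\n", name, len(ids)) // 0 everywhere here
        }
    }

Zero containers for every component is consistent with the CreateContainer failures shown later in the CRI-O and kubelet excerpts.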
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
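The four log sources gathered here use exactly the commands visible above (journalctl for kubelet and CRI-O, crictl/docker for container status, dmesg for kernel warnings). A local sketch of the same pass, assuming the commands run directly on the node rather than through minikube's SSH runner:

    // gatherlogs.go: run the diagnostic commands from the log and print output.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []struct{ name, cmd string }{
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"CRI-O", "sudo journalctl -u crio -n 400"},
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            fmt.Printf("==> %s <== (err: %v)\n%s\n", c.name, err, out)
        }
    }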
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the kubeadm init failure dumped above]
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr omitted: identical to the kubeadm init failure dumped above]
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224853914Z" level=info msg="createCtr: removing container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224896558Z" level=info msg="createCtr: deleting container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71 from storage" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.227165671Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.202267878Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4078c428-1413-4c71-9631-402893c5a2dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.203230958Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b7d07987-8e24-40b0-aab0-1f5a40695194 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204195061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204394394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.207757566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.20814543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.225908525Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227342354Z" level=info msg="createCtr: deleting container ID bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from idIndex" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
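Every container create in this CRI-O excerpt fails with "cannot open sd-bus: No such file or directory". One plausible reading, not confirmed by the log, is that the OCI runtime is configured for the systemd cgroup manager but cannot reach a systemd bus inside the docker-driver node. The triage sketch below checks the conventional bus socket paths under that assumption; both paths are assumptions, not values from the log:

    // sdbuscheck.go: look for the sockets a systemd cgroup manager needs.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        sockets := []string{
            "/run/systemd/private",        // systemd private bus
            "/run/dbus/system_bus_socket", // system D-Bus socket
        }
        for _, s := range sockets {
            if _, err := os.Stat(s); err != nil {
                fmt.Printf("%s: missing (%v)\n", s, err)
            } else {
                fmt.Printf("%s: present\n", s)
            }
        }
    }

If those sockets are absent, pointing the runtime at the cgroupfs manager is a common workaround, though the log does not show which cgroup manager this node was actually configured with.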
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:25.027366    3251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:25.027962    3251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:25.029618    3251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:25.030053    3251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:25.031563    3251 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:25 up  3:03,  0 user,  load average: 0.05, 0.06, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:17 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:17 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:17 ha-798711 kubelet[1962]: E1002 21:21:17.227652    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.200617    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.229960    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230055    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230084    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.903291    1962 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (295.835746ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:25.400244  144324 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (1.35s)
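The repeated kubelet errors above ("container create failed: cannot open sd-bus: No such file or directory") indicate the OCI runtime cannot reach the systemd D-Bus socket inside the kicbase node, so no control-plane container (etcd, kube-scheduler, kube-controller-manager) ever starts and the apiserver stays Stopped. A minimal sketch of how one might confirm that from the host, assuming the docker driver and the ha-798711 node container from this profile (the socket path is the conventional systemd location, not something the log itself verifies):

    # Is systemd's D-Bus socket present inside the node container?
    docker exec ha-798711 ls -l /run/dbus/system_bus_socket

    # Did systemd come up as PID 1 inside the node?
    docker exec ha-798711 systemctl is-system-running

With CgroupDriver:systemd (see the docker info dump below), runc needs that socket to create cgroup scopes, so every CreateContainer call will keep failing this way while it is missing.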

TestMultiControlPlane/serial/AddWorkerNode (1.52s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 node add --alsologtostderr -v 5: exit status 103 (248.939532ms)

-- stdout --
	* The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-798711"

-- /stdout --
** stderr ** 
	I1002 21:21:25.459365  144435 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:25.459678  144435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:25.459689  144435 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:25.459693  144435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:25.459882  144435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:25.460140  144435 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:25.460464  144435 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:25.460855  144435 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:25.478675  144435 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:25.479051  144435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:25.533555  144435 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:25.522611568 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:21:25.533666  144435 api_server.go:166] Checking apiserver status ...
	I1002 21:21:25.533707  144435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:21:25.533759  144435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:25.551546  144435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	W1002 21:21:25.655835  144435 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:25.657973  144435 out.go:179] * The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	I1002 21:21:25.659591  144435 out.go:179]   To start a cluster, run: "minikube start -p ha-798711"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-798711 node add --alsologtostderr -v 5" : exit status 103
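node add exits with status 103 here because the preflight check in the stderr above found no running apiserver on the primary: the `pgrep -xnf kube-apiserver.*minikube.*` probe over SSH returned status 1. A sketch of re-running that check by hand before retrying, using the same binary and profile as the harness:

    # Ask minikube for the apiserver state directly
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p ha-798711

    # Run the same process probe the CLI used, via docker exec instead of SSH
    docker exec ha-798711 sudo pgrep -xnf 'kube-apiserver.*minikube.*'

Until that probe finds a PID, any node add against this profile will fail the same way.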
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
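The inspect output above also records the host port mappings the harness depends on (22/tcp → 32783 for SSH, 8443/tcp → 32786 for the apiserver). The same Go template the runner uses for the SSH port can be pointed at the apiserver port; a sketch assuming the container name above:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-798711
    # expected from the mappings above: 32786

Nothing is listening behind that mapping while the apiserver is down, so a probe against 127.0.0.1:32786 would still be refused.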
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (290.463341ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:25.959828  144540 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
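Besides the stopped apiserver, the status probes keep tripping over a second, independent problem: the "ha-798711" entry is missing from /home/jenkins/minikube-integration/21682-80114/kubeconfig, which is what the stale-context warning in each stdout block refers to. The fix the warning itself suggests, spelled out for this profile (a sketch; it repairs the kubectl context only, not the control plane):

    out/minikube-linux-amd64 -p ha-798711 update-context
    kubectl config get-contexts   # confirm the ha-798711 context is present again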
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
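The provisioning script above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already present in /etc/hosts, so repeated provisioning passes leave the file unchanged. A quick spot-check from a shell on the node (illustrative only, not part of the recorded run):

	$ grep -n 'ha-798711' /etc/hosts
	$ hostname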
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
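configureAuth generated a server certificate whose SANs (listed in the `generating server cert` line above) cover every name the machine will be reached by. If that handshake ever needs debugging, the SANs can be read back from the copied cert on the node; the path comes from ServerCertRemotePath in the auth options above, the command itself is illustrative:

	$ sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'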
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
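The /etc/sysconfig/crio.minikube drop-in written during provisioning above tells CRI-O to treat the whole service CIDR (10.96.0.0/12) as an insecure registry, so image pulls from in-cluster registries on service IPs do not require TLS. To confirm the crio unit actually picked the file up after its restart, something like the following works from a shell on the node (illustrative):

	$ cat /etc/sysconfig/crio.minikube
	$ systemctl cat crio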
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
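Disabling the stock bridge and podman CNI configs (renamed with a .mk_disabled suffix by the find/mv above) clears the way for kindnet, which minikube recommends further down for the multi-node case. Listing the directory verifies that only renamed, inactive configs remain (illustrative):

	$ sudo ls -la /etc/cni/net.d/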
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
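Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, selecting the systemd cgroup manager with conmon in the pod cgroup, and opening unprivileged ports through default_sysctls; the daemon-reload and restart make them effective. The resulting values can be grepped straight out of the drop-in (illustrative):

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf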
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
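The kubelet unit snippet above is a systemd ExecStart override: the empty ExecStart= line clears the packaged command before the kubelet is relaunched with minikube's flags (note --node-ip and --hostname-override pinning the node identity). It lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp a few lines below, and the merged unit can be inspected with:

	$ systemctl cat kubelet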
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
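The generated kubeadm.yaml above stitches four documents together: InitConfiguration (node registration and advertise address), ClusterConfiguration (control-plane endpoint and per-component extraArgs), KubeletConfiguration (systemd cgroups, CRI-O socket, disk eviction disabled), and KubeProxyConfiguration (cluster CIDR, conntrack tuning skipped). Recent kubeadm releases can lint such a file before init; assuming this kubeadm build ships the validate subcommand, a dry check would look like (illustrative, not part of the recorded run):

	$ sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml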
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
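Because the ip_vs probe above exited with status 1, kube-vip is configured for ARP-based leader election rather than IPVS load balancing: vip_arp=true, vip_leaderelection=true, and the VIP 192.168.49.254 bound to eth0. On a host whose kernel actually ships the modules, they could be loaded ahead of time so the load-balancing path is taken (illustrative; the kernel here does not provide them):

	$ sudo modprobe -a ip_vs ip_vs_rr && lsmod | grep ip_vs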
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
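With this second edit, /etc/hosts on the node resolves both host.minikube.internal (the gateway, 192.168.49.1, added earlier) and control-plane.minikube.internal (the HA VIP, 192.168.49.254). A one-line check from the node (illustrative):

	$ grep -E 'host.minikube.internal|control-plane.minikube.internal' /etc/hosts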
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
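	The crictl one-liners suggested in the kubeadm output above can be combined into a quick triage pass. A minimal sketch, assuming crictl is on PATH and CRI-O is listening on its default socket (CONTAINERID is a placeholder for an ID taken from the first command's output):

	    # List all Kubernetes control-plane containers CRI-O knows about,
	    # including exited ones (pause sandboxes filtered out)
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a \
	      | grep kube | grep -v pause

	    # Inspect the logs of a failing container found above
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID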
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
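	Each grep-then-rm pair above follows the same pattern: keep a kubeconfig only if it already points at the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. A compact shell equivalent of what the log shows minikube doing, with the file list taken directly from the lines above:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # remove the file unless it references the expected endpoint
	      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	        || sudo rm -f /etc/kubernetes/$f
	    done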
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
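	The four log-gathering steps above map onto ordinary shell commands. A minimal sketch of the same collection, assuming systemd journals and crictl are available on the node (commands copied from the Run lines above):

	    sudo journalctl -u kubelet -n 400    # kubelet logs
	    sudo journalctl -u crio -n 400       # CRI-O logs
	    sudo crictl ps -a                    # container status
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400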
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224853914Z" level=info msg="createCtr: removing container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224896558Z" level=info msg="createCtr: deleting container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71 from storage" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.227165671Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.202267878Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4078c428-1413-4c71-9631-402893c5a2dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.203230958Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b7d07987-8e24-40b0-aab0-1f5a40695194 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204195061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204394394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.207757566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.20814543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.225908525Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227342354Z" level=info msg="createCtr: deleting container ID bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from idIndex" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
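	Every CreateContainer attempt above fails with the same "cannot open sd-bus" error, which suggests the runtime is trying to reach systemd over D-Bus and cannot. A hedged starting point for investigating on the node; the cgroup-manager check is an assumption about the likely cause, not something the log itself confirms:

	    # Is CRI-O configured to delegate cgroups to systemd? (assumption)
	    sudo crio config 2>/dev/null | grep cgroup_manager

	    # Is a systemd D-Bus endpoint actually reachable in this environment?
	    ls -l /run/dbus/system_bus_socket /run/systemd/private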
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:26.538456    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:26.538927    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:26.540525    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:26.540954    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:26.542440    3420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:26 up  3:03,  0 user,  load average: 0.05, 0.06, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:17 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:17 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:17 ha-798711 kubelet[1962]: E1002 21:21:17.227652    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.200617    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.229960    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230055    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230084    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.903291    1962 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (298.474261ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:26.921494  144863 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.52s)
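A note on the `status --format={{.APIServer}}` invocations above: minikube's --format flag takes a Go text/template rendered against its status struct, so the command prints exactly one field. A minimal sketch of that rendering (the struct below is a stand-in for illustration, not minikube's exact type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct minikube renders with --format.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		// The same template string as the helper invocation in the logs.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// With the control plane down, as above, this prints "Stopped".
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Stopped"})
	}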

x
+
TestMultiControlPlane/serial/NodeLabels (1.32s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-798711 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-798711 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (46.463555ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-798711

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-798711 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-798711 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
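For context on the two assertions above: the jsonpath template walks every node and prints its label map followed by a comma, bracketed by "[" and "]", and the test then unmarshals that text as JSON, which is why an empty kubectl stdout surfaces as "unexpected end of JSON input". A minimal sketch of that decode step in Go, assuming the trailing comma from the range loop is normalized first (illustrative only, not the actual ha_test.go code):

	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Shape produced by:
		//   kubectl get nodes -o 'jsonpath=[{range .items[*]}{.metadata.labels},{end}]'
		// Each iteration emits the label map plus a comma, so a trailing
		// comma precedes the closing bracket.
		raw := `[{"kubernetes.io/hostname":"ha-798711"},{"kubernetes.io/hostname":"ha-798711-m02"},]`

		// "...},]" is not valid JSON; drop the trailing comma (assumption:
		// the real test does something equivalent before unmarshalling).
		cleaned := strings.Replace(raw, ",]", "]", 1)

		var labels []map[string]string
		if err := json.Unmarshal([]byte(cleaned), &labels); err != nil {
			// An empty stdout, as in this failure, instead yields
			// "unexpected end of JSON input".
			fmt.Println("decode failed:", err)
			return
		}
		fmt.Println(labels)
	}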
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
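The inspect dump above is also where the helpers can read the host-side port mappings (8443/tcp published at 127.0.0.1:32786). A minimal sketch of extracting that mapping from `docker inspect` JSON in Go, with the payload struct trimmed to the fields used here (illustrative, not minikube's own code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the slice of `docker inspect` output we need.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "ha-798711").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// For the container above this prints 127.0.0.1:32786.
		for _, binding := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", binding.HostIp, binding.HostPort)
		}
	}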
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (290.009973ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:27.276618  144995 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
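The configureAuth phase above boils down to minting a server certificate whose SANs match the san=[...] list logged by provision.go:117. A minimal Go sketch of that kind of SAN-bearing certificate generation — self-signed here for brevity (minikube signs with its CA key instead), and the output file name is illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Key pair for the server certificate.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-798711"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log line: IPs plus DNS names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"ha-798711", "localhost", "minikube"},
        }
        // Self-signed: template doubles as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }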
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
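The "detected systemd cgroup driver" decision above is a host probe. A rough Go sketch of one common heuristic — cgroup v2's unified hierarchy or a systemd init both suggest the systemd driver; this is an illustrative approximation, not minikube's actual detect.go logic:

    package main

    import (
        "fmt"
        "os"
    )

    // detectCgroupDriver returns "systemd" when the host looks systemd-managed,
    // "cgroupfs" otherwise. Illustrative heuristic only.
    func detectCgroupDriver() string {
        // cgroup v2 exposes cgroup.controllers at the unified-hierarchy root.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd"
        }
        // /run/systemd/system exists on systems booted with systemd as init.
        if _, err := os.Stat("/run/systemd/system"); err == nil {
            return "systemd"
        }
        return "cgroupfs"
    }

    func main() {
        fmt.Println("cgroup driver:", detectCgroupDriver())
    }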
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
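The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and restart. The two key substitutions, expressed as an equivalent Go sketch (path and keys taken from the log; error handling simplified, and this is not minikube's implementation):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Point CRI-O at the desired pause image (first sed above).
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // Force the systemd cgroup manager (second sed above).
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
    }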
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
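"Will wait 60s for socket path" above is a simple stat poll with a deadline before crictl is consulted. A minimal sketch of that wait loop (the 500ms interval and error text are illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }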
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
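The bash one-liner above makes the /etc/hosts update idempotent: filter out any existing host.minikube.internal line, append the fresh mapping, and copy the result back. The same pattern as a Go sketch (writing to a demo path rather than /etc/hosts, and dropping blank lines for brevity):

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost removes any line ending in "\t"+name and appends ip+"\t"+name,
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+name) {
                continue // drop blanks and any stale mapping for name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("hosts.demo", "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }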
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
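The generated kubeadm config above is one file carrying four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch of iterating such a multi-document stream, assuming gopkg.in/yaml.v3 is available as the parser (any multi-doc YAML decoder works the same way):

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // no more documents in the stream
                }
                panic(err)
            }
            // Each document carries its own apiVersion/kind pair.
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }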
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
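Note how the kube-vip config above fell back to ARP-mode leader election because the earlier `lsmod | grep ip_vs` probe came back empty. A Go equivalent of that probe, scanning /proc/modules directly (illustrative, not minikube's kube-vip.go):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded reports whether a kernel module appears in /proc/modules,
    // the same information `lsmod` prints.
    func moduleLoaded(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), name+" ") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := moduleLoaded("ip_vs")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", ok)
    }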
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
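The sequence above installs each CA into the OpenSSL trust directory: compute the certificate's subject hash with `openssl x509 -hash -noout`, then symlink the PEM as /etc/ssl/certs/<hash>.0 so OpenSSL's lookup-by-hash finds it. A sketch that shells out to openssl for the hash (requires the openssl binary and root for the symlink; paths are illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl prints the subject hash (e.g. b5213941) on a single line.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL looks up CAs as <subject-hash>.<n>; .0 for the first match.
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }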
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
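The four-minute control-plane check that failed above amounts to repeated HTTPS probes of each component's health endpoint. A stripped-down sketch of that polling loop — endpoints taken from the log; TLS verification is skipped because the probe runs before the client trusts the cluster CA, and kubeadm runs the probes concurrently where this sketch is serial for brevity:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probe polls url until it answers 200 OK or the timeout elapses.
    func probe(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        for _, url := range []string{
            "https://192.168.49.2:8443/livez", // kube-apiserver
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        } {
            if err := probe(url, 4*time.Minute); err != nil {
                fmt.Println(err)
            }
        }
    }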
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
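
The crictl triage that kubeadm recommends above can be reproduced by hand. A minimal sketch, assuming a shell on the node (for this run, `minikube ssh -p ha-798711`; the profile name is taken from the logs further down):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# then, for a failing container ID from the listing:
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the "container status" section further down is empty, so there are no container logs to inspect; the failure happens at container creation time.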
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
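
The grep/rm sequence above is the stale-kubeconfig cleanup: any file under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. A shell sketch of the same logic (a paraphrase of what the log shows, not minikube's actual implementation):

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done

Here all four grep checks exit with status 2 because the files were already removed by the preceding `kubeadm reset`, so the rm calls are no-ops.
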
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr omitted: verbatim duplicate of the kubeadm init output quoted in full above)
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	(stdout and stderr omitted: verbatim duplicate of the kubeadm init output quoted in full above)
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224853914Z" level=info msg="createCtr: removing container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.224896558Z" level=info msg="createCtr: deleting container a0b039e7382073517839d62f84b1d7bdddc00a41c8d9ef7110dd1546a9ef6d71 from storage" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:17 ha-798711 crio[783]: time="2025-10-02T21:21:17.227165671Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=a85b4c17-95d2-4aa8-9a95-1ebc8c73798e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.202267878Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=4078c428-1413-4c71-9631-402893c5a2dd name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.203230958Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=b7d07987-8e24-40b0-aab0-1f5a40695194 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204195061Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.204394394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.207757566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.20814543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.225908525Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227342354Z" level=info msg="createCtr: deleting container ID bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from idIndex" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
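
Every CreateContainer attempt in this section fails with the same runtime error, `cannot open sd-bus: No such file or directory`, before any control-plane container exists, which is why the kubeadm health checks above timed out. That message typically comes from an OCI runtime configured for the systemd cgroup manager that cannot reach a systemd D-Bus socket inside the kicbase container. A hedged way to confirm the configuration (conventional CRI-O paths, not taken from this log):

	# after: minikube ssh -p ha-798711
	sudo grep -rn cgroup_manager /etc/crio/
	ls -l /run/systemd/private /run/dbus/system_bus_socket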
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:27.854152    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:27.854609    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:27.856185    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:27.856862    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:27.857835    3576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
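
The refused connections are consistent with the CRI-O section above: the kube-apiserver container was never created, so nothing is listening on port 8443. A quick sketch of the same check from inside the node (`ss` and `curl` are assumed to be available in the image):

	# after: minikube ssh -p ha-798711
	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	curl -ksf https://192.168.49.2:8443/livez || echo "livez unreachable"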
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:27 up  3:03,  0 user,  load average: 0.13, 0.08, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:17 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:17 ha-798711 kubelet[1962]: E1002 21:21:17.227652    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.200617    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.229960    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230055    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:18 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230084    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.903291    1962 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:21:27 ha-798711 kubelet[1962]: E1002 21:21:27.223255    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (304.954329ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:28.237184  145322 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (1.32s)
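
Separately from the apiserver failure, the status output above flags a stale kubectl context; the fix it suggests is a one-liner (profile flag assumed from this run):

	minikube update-context -p ha-798711
	kubectl config current-context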

x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.59s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-798711" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-798711" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
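The inspect output shows the container itself is healthy: State.Running is true, Memory is capped at 3221225472 bytes (3072 MiB, matching the requested --memory 3072), and the apiserver port 8443/tcp is published on 127.0.0.1:32786. When a single field is needed rather than the full JSON, a Go template does the job; the first form below is the same lookup the start log uses later for the SSH port:

    # Published SSH port only (template as used in the start log below)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-798711

    # Container state in one shot
    docker container inspect -f '{{.State.Status}}' ha-798711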
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (297.62148ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:21:28.865921  145575 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
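Note the contrast with the earlier probe: --format={{.Host}} prints Running here while --format={{.APIServer}} printed Stopped above, consistent with a live container whose kubeconfig entry is missing rather than a dead cluster. To view the fields side by side, one hedged option is a combined template (Host and APIServer are taken from the two probes above; Kubelet and Kubeconfig are assumed from minikube's default status output):

    out/minikube-linux-amd64 status -p ha-798711 \
      --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'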
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
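Renaming the pre-installed bridge and podman CNI definitions to *.mk_disabled keeps them from competing with the CNI minikube installs for this cluster (kindnet, per the multinode detection further down). A hypothetical way to list what was parked, if inspecting the node by hand:

    ls /etc/cni/net.d/*.mk_disabled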
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
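The stop/disable/mask sequence above takes cri-dockerd and dockerd out of the picture so CRI-O is the only runtime answering CRI requests on the node. A hypothetical manual confirmation:

    systemctl is-enabled docker.service                # expect: masked
    systemctl is-active --quiet crio && echo running   # after the crio restart below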
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
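Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this shape (a sketch reconstructed from the commands, not a dump of the actual file):

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

The systemd cgroup manager matches the "systemd" driver detected on the host above, and the unprivileged-port sysctl lets pods bind low ports without extra capabilities.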
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
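Both /etc/hosts edits in this log use the same idempotent pattern: grep -v strips any stale line for the name, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. The net effect here is a single line:

    192.168.49.1	host.minikube.internal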
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
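One way to sanity-check a generated config like the one above by hand is kubeadm's own validator (a hypothetical manual step, using the same binary path this run uses):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml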
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
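With no ip_vs modules in the host kernel, kube-vip cannot do IPVS-based control-plane load-balancing, so minikube falls back to plain ARP advertisement of the VIP (the vip_arp env set to "true" in the manifest below). On a host that ships the modules, they could be loaded with something like (hypothetical, not part of this run):

    sudo modprobe ip_vs
    lsmod | grep ip_vs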
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
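This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml just below, so the kubelet runs kube-vip as a static pod that advertises the HA VIP 192.168.49.254 on eth0 via ARP. Once a control plane is healthy, the VIP should appear on the node's interface (a hypothetical check):

    ip addr show eth0 | grep 192.168.49.254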
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
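The <hash>.0 link names above come from OpenSSL's subject-hash convention: openssl x509 -hash -noout prints the eight-hex-digit hash (b5213941 for minikubeCA.pem here), and OpenSSL resolves CAs in /etc/ssl/certs by <hash>.N. The same link could be rebuilt by hand with:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"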
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
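All three health checks failed with connection refused or a context deadline, which points at static pod containers that never came up (or exited immediately) rather than at any single misconfigured component. Following the hint in the output itself, a first pass at manual triage on the node might be (hypothetical, not part of the run):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo journalctl -u kubelet --no-pager | tail -n 50

minikube does not stop here: the kubeadm reset and second kubeadm init below are its automatic retry.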
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output shown above]
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
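	A concrete form of the advice in the box above for this failure; a sketch, assuming the ha-798711 profile (minikube logs accepts -p to select a profile):

	# Collect the full cluster logs into logs.txt for attaching to a GitHub issue:
	out/minikube-linux-amd64 logs --file=logs.txt -p ha-798711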
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init failure output shown above]
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.200595352Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d848c156-33ce-46f7-8e6e-29fbdaf70013 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.201658612Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=faa3bc6f-b367-422b-b82c-43026d497dcf name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.202647969Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.203316168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.207907896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.208349823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.22553271Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226941251Z" level=info msg="createCtr: deleting container ID e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from idIndex" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226994647Z" level=info msg="createCtr: removing container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.227039027Z" level=info msg="createCtr: deleting container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from storage" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.229654881Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
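	The repeated CRI-O error above, "Container creation error: cannot open sd-bus: No such file or directory", typically means the OCI runtime was asked to use the systemd cgroup manager but cannot reach a D-Bus/systemd socket inside the node. A minimal diagnostic sketch, assuming the docker driver and the ha-798711 node container (the file paths are stock CRI-O defaults, not confirmed by this log):

	# Is a D-Bus system bus socket present inside the node container?
	docker exec ha-798711 ls -l /var/run/dbus/system_bus_socket
	# Which cgroup manager is CRI-O configured with?
	docker exec ha-798711 grep -r cgroup_manager /etc/crio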
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:29.447804    3758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:29.448351    3758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:29.449943    3758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:29.450383    3758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:29.451911    3758 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:29 up  3:03,  0 user,  load average: 0.13, 0.08, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:18 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.230084    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:21:18 ha-798711 kubelet[1962]: E1002 21:21:18.903291    1962 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://192.168.49.2:8443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:21:27 ha-798711 kubelet[1962]: E1002 21:21:27.223255    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.200069    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.229981    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230113    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230157    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (297.542091ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:21:29.830442  145903 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.59s)
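	The "stale minikube-vm" warning in the status output above points at its own remedy; a sketch, assuming the same profile (update-context rewrites the kubeconfig endpoint for the named profile):

	out/minikube-linux-amd64 update-context -p ha-798711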

                                                
                                    
TestMultiControlPlane/serial/CopyFile (1.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --output json --alsologtostderr -v 5: exit status 6 (287.670296ms)

                                                
                                                
-- stdout --
	{"Name":"ha-798711","Host":"Running","Kubelet":"Running","APIServer":"Stopped","Kubeconfig":"Misconfigured","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:21:29.889053  146014 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:29.889296  146014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:29.889305  146014 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:29.889309  146014 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:29.889514  146014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:29.889688  146014 out.go:368] Setting JSON to true
	I1002 21:21:29.889716  146014 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:29.889784  146014 notify.go:220] Checking for updates...
	I1002 21:21:29.890103  146014 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:29.890121  146014 status.go:174] checking status of ha-798711 ...
	I1002 21:21:29.890649  146014 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:29.909338  146014 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:29.909359  146014 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:29.909620  146014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:29.926952  146014 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:29.927262  146014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:29.927311  146014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:29.945197  146014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:30.045361  146014 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:30.051928  146014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:30.064161  146014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:30.117604  146014 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:30.108198794 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:30.118059  146014 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:30.118093  146014 api_server.go:166] Checking apiserver status ...
	I1002 21:21:30.118135  146014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:30.128892  146014 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:30.128919  146014 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:30.128934  146014 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-798711 status --output json --alsologtostderr -v 5" : exit status 6
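Note: the "Kubeconfig":"Misconfigured" field in the status JSON above maps directly to the status.go:458 error in the stderr: the "ha-798711" profile has no entry in /home/jenkins/minikube-integration/21682-80114/kubeconfig. A minimal out-of-band repair sketch, not part of the recorded run (it assumes the same profile name and binary path, and would clear only the kubeconfig problem, since the apiserver is also reported Stopped):
	kubectl config get-contexts                            # "ha-798711" is absent from the kubeconfig in use
	out/minikube-linux-amd64 -p ha-798711 update-context   # rewrite the profile's kubeconfig entry
	out/minikube-linux-amd64 -p ha-798711 status           # Kubeconfig should then read Configured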
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
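The harness reads individual fields out of inspect documents like this one with Go templates rather than parsing the full JSON (see the docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" call in the status log above). A sketch of the same technique against this dump, here for the apiserver port mapping:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-798711
	# prints 32786, per the NetworkSettings.Ports section of the inspect output above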
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (292.565047ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:30.431302  146150 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
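The "(may be ok)" wording reflects that minikube status encodes component health as bit flags in its exit code rather than as a simple pass/fail. Reading exit status 6 as 2 (cluster/apiserver not running) + 4 (kubeconfig misconfigured) matches the APIServer:Stopped and Kubeconfig:Misconfigured fields above; the exact flag values are an assumption about this minikube build, not something stated in the report:
	out/minikube-linux-amd64 status -p ha-798711 >/dev/null; echo $?
	# 6 here; assumed flags: 1=host, 2=cluster/apiserver, 4=kubeconfig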
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
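A hedged way to see what that find/mv pass left behind (names taken from the cni.go line above), so that only the CNI minikube deploys later stays active:
	ls -1 /etc/cni/net.d
	# e.g. 10-crio-bridge.conflist.disabled and 87-podman-bridge.conflist.mk_disabled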
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
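The sed edits above should leave the CRI-O drop-in with the values below; a hedged spot-check against the file they targeted:
	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	# default_sysctls = [  (followed by "net.ipv4.ip_unprivileged_port_start=0",)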
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
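Once the kubelet unit and its drop-in are copied over (the 352- and 359-byte scp lines further down), the merged definition can be inspected; a minimal sketch:
	systemctl cat kubelet.service   # /lib/systemd/system/kubelet.service plus kubelet.service.d/10-kubeadm.conf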
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
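Before kubeadm init consumes this file, it can be sanity-checked offline; a hedged sketch using the binary and path this run writes below (kubeadm.yaml.new is copied to kubeadm.yaml just before init):
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new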
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
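kube-vip therefore runs without IPVS-based control-plane load-balancing. A hedged check on a host where loading modules is permitted (it likely fails inside the kicbase container, as the probe above suggests):
	sudo modprobe ip_vs
	lsmod | grep ip_vs   # any output here would let kube-vip enable load-balancing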
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
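With vip_leaderelection enabled above, the current leader can be read from the Lease named by vip_leasename once a cluster is actually up (this run never gets that far); a hedged sketch:
	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
	ip addr show dev eth0 | grep 192.168.49.254   # the VIP should sit on eth0 of the leader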
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
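The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-name hashes, which is how the system trust store locates a CA; a minimal sketch of the convention:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the link above
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem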
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
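Following the hint in the output above, a hedged next step for this failure (the crictl commands are kubeadm's own suggestion; the journalctl line is an addition for the kubelet's view of the static pods):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute an ID from the listing
	sudo journalctl -u kubelet --no-pager | tail -n 50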
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	(identical to the five connection-refused errors and final message quoted in the stderr above)
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the kubeadm init output quoted in full above
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout and stderr: identical to the kubeadm init output quoted in full above
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227381297Z" level=info msg="createCtr: removing container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.227416582Z" level=info msg="createCtr: deleting container bb13cc4b1ce186d4edb37bbd775797ac8a0ee7d29694e9c79b97f309a48867cc from storage" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:18 ha-798711 crio[783]: time="2025-10-02T21:21:18.229650508Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=fc8c0246-edc4-4931-a269-6c23335bef1b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.201348085Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=c2fe81ca-3381-4422-bd6a-02e61e8efe1c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.202348381Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=eb12ffd1-208d-4fc2-9e76-5458df25d67a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203292175Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.203537082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.206897734Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.207314627Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.220275497Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221794152Z" level=info msg="createCtr: deleting container ID aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from idIndex" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.200595352Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d848c156-33ce-46f7-8e6e-29fbdaf70013 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.201658612Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=faa3bc6f-b367-422b-b82c-43026d497dcf name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.202647969Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.203316168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.207907896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.208349823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.22553271Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226941251Z" level=info msg="createCtr: deleting container ID e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from idIndex" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226994647Z" level=info msg="createCtr: removing container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.227039027Z" level=info msg="createCtr: deleting container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from storage" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.229654881Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:31.006597    3932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:31.007217    3932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:31.008857    3932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:31.009340    3932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:31.010866    3932 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:31 up  3:03,  0 user,  load average: 0.13, 0.08, 0.14
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:21 ha-798711 kubelet[1962]: E1002 21:21:21.107731    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.200810    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224532    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224634    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:22 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:22 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.224666    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:21:27 ha-798711 kubelet[1962]: E1002 21:21:27.223255    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.200069    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.229981    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230113    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230157    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:29 ha-798711 kubelet[1962]: E1002 21:21:29.843246    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: I1002 21:21:30.022973    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: E1002 21:21:30.023427    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (299.700436ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:21:31.389128  146475 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (1.56s)
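
The CRI-O and kubelet excerpts above all fail on the same error, "container create failed: cannot open sd-bus: No such file or directory", which is why no control-plane container ever starts and every check against ports 8443/10257/10259 is refused. A minimal by-hand triage sketch, assuming shell access to the ha-798711 node and the crictl/journalctl tooling already shown in this report (the grep pattern and socket paths below are illustrative assumptions, not test-suite output):

    # open a shell on the node (docker driver)
    out/minikube-linux-amd64 -p ha-798711 ssh

    # kubeadm's own advice: list all Kubernetes containers
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # look for the sd-bus failures in CRI-O's journal
    sudo journalctl -u crio -n 400 | grep -i sd-bus

    # sd-bus talks to systemd/D-Bus; check the expected bus sockets exist inside the node (paths assumed for a systemd host)
    ls -l /run/systemd/private /run/dbus/system_bus_socket

If those sockets are missing, CRI-O's systemd cgroup manager (CgroupDriver:systemd in the docker info below) cannot create container scopes, which would plausibly account for every CreateContainer failure logged here.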

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 node stop m02 --alsologtostderr -v 5: exit status 85 (58.758397ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:21:31.447343  146589 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:31.447629  146589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:31.447639  146589 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:31.447644  146589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:31.447863  146589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:31.448109  146589 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:31.448439  146589 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:31.450757  146589 out.go:203] 
	W1002 21:21:31.452002  146589 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 21:21:31.452020  146589 out.go:285] * 
	W1002 21:21:31.457015  146589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:21:31.458274  146589 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-798711 node stop m02 --alsologtostderr -v 5": exit status 85
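Exit status 85 matches the GUEST_NODE_RETRIEVE error above: the profile has no m02 node to stop, since the initial kubeadm init never completed and no secondary control-plane node was ever added. A quick confirmation sketch, assuming the profile still exists (both are standard minikube subcommands; the expectation that m02 is absent is an inference from the error, not recorded output):

    # list the nodes minikube knows about for this profile
    out/minikube-linux-amd64 -p ha-798711 node list

    # cross-check the stored profile configuration
    out/minikube-linux-amd64 profile list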
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (291.412118ms)

                                                
                                                
-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:21:31.505388  146600 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:31.505659  146600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:31.505669  146600 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:31.505675  146600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:31.505896  146600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:31.506086  146600 out.go:368] Setting JSON to false
	I1002 21:21:31.506121  146600 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:31.506248  146600 notify.go:220] Checking for updates...
	I1002 21:21:31.506538  146600 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:31.506556  146600 status.go:174] checking status of ha-798711 ...
	I1002 21:21:31.507130  146600 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:31.524980  146600 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:31.525010  146600 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:31.525351  146600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:31.546333  146600 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:31.546585  146600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:31.546619  146600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:31.564921  146600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:31.665299  146600 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:31.671477  146600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:31.684063  146600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:31.739109  146600 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:31.729678103 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:31.739538  146600 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:31.739566  146600 api_server.go:166] Checking apiserver status ...
	I1002 21:21:31.739601  146600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:31.750138  146600 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:31.750162  146600 status.go:463] ha-798711 apiserver status = Stopped (err=<nil>)
	I1002 21:21:31.750189  146600 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5" : exit status 6
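Both failures trace back to the kubeconfig error in stderr ("ha-798711" does not appear in .../kubeconfig) rather than to the container itself, which docker reports as running. A hedged recovery sketch, assuming the KUBECONFIG path shown in the log above:

	export KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	out/minikube-linux-amd64 -p ha-798711 update-context   # the WARNING in stdout suggests exactly this
	kubectl config current-context                         # expect ha-798711 once the context is restored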
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
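The full dump above is what helpers_test.go captures for the post-mortem; the individual fields the status checks rely on can be queried directly with the same Go templates that appear in the stderr log (a sketch):

	docker inspect ha-798711 --format '{{.State.Status}}'                                              # "running" per the dump
	docker inspect ha-798711 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' # 32783, the SSH port used above
	docker inspect ha-798711 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'       # 192.168.49.2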
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (293.249511ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:32.051694  146722 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node stop m02 --alsologtostderr -v 5                                                                  │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
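	Each kubectl row above goes through minikube's kubectl passthrough; the repeated pod-IP polls, for example, correspond to invocations of this shape (sketch taken verbatim from the ARGS column):

	out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'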
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
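The SSH command above writes the CRIO_MINIKUBE_OPTIONS drop-in and restarts the runtime. A minimal sketch for verifying it from inside the node (assumes `minikube ssh -p ha-798711` works, and assumes the kicbase crio.service sources /etc/sysconfig/crio.minikube as an environment file):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl show crio --property=EnvironmentFiles   # check whether the unit actually references the drop-in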
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
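Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) should leave the CRI-O drop-in in roughly this state. A sketch for spot-checking it after the restart (exact file layout varies by kicbase image):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (approximately):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",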
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
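The /etc/hosts rewrite above pins host.minikube.internal to the network gateway. A one-line sketch to confirm it took effect inside the node:

    grep host.minikube.internal /etc/hosts   # expect: 192.168.49.1	host.minikube.internal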
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
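This kubelet unit override only takes effect after the daemon-reload and start that the log performs further down. A sketch for inspecting the merged unit on the node:

    systemctl cat kubelet   # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in written below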
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
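The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml just before init. A hedged pre-flight sketch, using the binary path from this log (assumes `kubeadm config validate` behaves in v1.34 as in recent releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml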
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
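Because the ip_vs modules were not available (see the kube-vip.go:163 line above), this manifest sets up ARP-mode leader election on eth0 rather than IPVS load-balancing. A sketch for checking the VIP once the static pod is up:

    ip addr show dev eth0 | grep 192.168.49.254   # VIP bound on the current leader
    curl -k https://192.168.49.254:8443/healthz   # should answer once an apiserver is healthy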
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
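The hex-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash lookups for the three CA files. A sketch for verifying one of them, reusing the same commands the log runs:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem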
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
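All three control-plane components failed their health checks at the default ports, which usually means the static pods never came up under CRI-O. Expanding the kubeadm hint above into a triage sequence (a sketch, to be run inside the node):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # substitute a failing container ID
    sudo journalctl -u kubelet --no-pager -n 100                                     # kubelet-side view of pod start failures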
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221833843Z" level=info msg="createCtr: removing container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.221874973Z" level=info msg="createCtr: deleting container aaaa0bea9c7c2e42debf54b9a7bd50d0d1654c5f9c1f56cdae8a875a72b76239 from storage" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:22 ha-798711 crio[783]: time="2025-10-02T21:21:22.224164779Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=79aa669d-ef75-48de-b432-30c4f5c5c685 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.200595352Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d848c156-33ce-46f7-8e6e-29fbdaf70013 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.201658612Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=faa3bc6f-b367-422b-b82c-43026d497dcf name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.202647969Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.203316168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.207907896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.208349823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.22553271Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226941251Z" level=info msg="createCtr: deleting container ID e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from idIndex" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226994647Z" level=info msg="createCtr: removing container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.227039027Z" level=info msg="createCtr: deleting container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from storage" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.229654881Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.201089546Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e72deda-ace2-4a89-af26-2c05b1e13c4e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.202143745Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=beaef750-3ead-4c9f-9995-7df9d4494893 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.203179849Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.203395655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.207566641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.208144101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.222693187Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224095207Z" level=info msg="createCtr: deleting container ID 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13 from idIndex" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224131745Z" level=info msg="createCtr: removing container 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224161838Z" level=info msg="createCtr: deleting container 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13 from storage" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.226036803Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:32.623767    4107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:32.624334    4107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:32.625932    4107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:32.626395    4107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:32.627971    4107 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:32 up  3:03,  0 user,  load average: 0.28, 0.11, 0.15
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:22 ha-798711 kubelet[1962]: E1002 21:21:22.842090    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: I1002 21:21:23.020527    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.020864    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:23 ha-798711 kubelet[1962]: E1002 21:21:23.449847    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 02 21:21:27 ha-798711 kubelet[1962]: E1002 21:21:27.223255    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.200069    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.229981    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230113    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230157    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:29 ha-798711 kubelet[1962]: E1002 21:21:29.843246    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: I1002 21:21:30.022973    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: E1002 21:21:30.023427    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.109326    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.200591    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226349    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:31 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:31 ha-798711 kubelet[1962]:  > podSandboxID="809957a7718c537a272955808ab83d0d209917c15901f264880b1842ca38ceb3"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226496    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:31 ha-798711 kubelet[1962]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:31 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226540    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	

-- /stdout --
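
Every container create attempt in the CRI-O and kubelet sections above fails with "cannot open sd-bus: No such file or directory". That error typically means the runtime is configured for the systemd cgroup manager but cannot reach a systemd bus from inside the node, so kube-apiserver, kube-controller-manager and kube-scheduler are never created and all the control-plane health checks time out. A minimal triage sketch, assuming the docker-driver node is the ha-798711 container from the inspect output below and that CRI-O reads drop-ins from /etc/crio/crio.conf.d (the drop-in name 99-cgroupfs.conf is illustrative, not taken from this report):

	# Confirm which cgroup manager CRI-O is configured with
	docker exec ha-798711 grep -r cgroup_manager /etc/crio/
	# Check whether a systemd bus socket is reachable inside the node
	docker exec ha-798711 ls -l /run/systemd/private /run/dbus/system_bus_socket
	# Possible workaround: switch CRI-O to the cgroupfs manager (conmon_cgroup
	# must then be "pod") and restart the runtime
	docker exec ha-798711 sh -c 'printf "[crio.runtime]\ncgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"\n" > /etc/crio/crio.conf.d/99-cgroupfs.conf'
	docker exec ha-798711 systemctl restart crio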
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (302.901825ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:33.005791  147045 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (1.62s)
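
The crictl triage that kubeadm suggests in the output above can be run by hand from inside the node, for example via `minikube ssh -p ha-798711`; CONTAINERID below is a placeholder for an ID taken from the first command:

	# List all kube-* containers CRI-O knows about, including exited ones
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Pull the logs of a failing container found above
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

In this run the "container status" section is empty, so no control-plane container was ever created and there are no logs to pull; the failure happens inside CRI-O's CreateContainer call itself.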

x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-798711" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
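The assertion parses `minikube profile list --output json` and expects the profile's Status to read "Degraded" once a control-plane node is stopped; here it still reads "Starting", i.e. the initial `start --ha` never completed. A minimal sketch of the same check by hand, assuming jq is available on the host:

	out/minikube-linux-amd64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-798711") | .Status'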
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
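The full inspect dump is mostly useful for spot checks; the test harness narrows the same data with Go templates (see the cli_runner lines in the start log below). A minimal sketch of an equivalent hand query using docker's -f/--format flag, against the fields shown above:

	docker inspect -f '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-798711
	# for this container: running 192.168.49.2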
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (292.012117ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E1002 21:21:33.629696  147296 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
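The stderr explains the exit code 6: the container is running, but the "ha-798711" entry is missing from the kubeconfig, so the endpoint lookup fails. The stdout warning names the fix; a minimal sketch, assuming the same profile name:

	out/minikube-linux-amd64 -p ha-798711 update-context
	kubectl config current-context   # should now report the ha-798711 context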
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr          │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node stop m02 --alsologtostderr -v 5                                                                  │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
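Note: the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; a quick way to confirm the edits landed is the grep below (a sketch -- expected values are taken from the commands logged above, not re-captured from the host):

  # Show the pause image, cgroup settings and unprivileged-port sysctl cri-o will use:
  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, per the sed commands above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "systemd"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",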
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
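Note: the --format template above packs name, driver, subnet, gateway, MTU and container IPs into a single JSON-like line; when reading the network by hand, a narrower template is easier (same docker CLI, network name taken from this run):

  # Print just the subnet and gateway of the ha-798711 network:
  docker network inspect ha-798711 \
    --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'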
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
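Note: the [Unit]/[Service] fragment above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (see the scp entries). To confirm what systemd actually renders, a sketch using standard systemctl verbs:

  # Show the kubelet unit plus every drop-in, then the effective ExecStart line:
  systemctl cat kubelet.service
  systemctl show kubelet.service -p ExecStart --no-pager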
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
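Note: the generated config above is written to /var/tmp/minikube/kubeadm.yaml.new below and later copied to /var/tmp/minikube/kubeadm.yaml. If it ever needs checking by hand, kubeadm can validate it without touching cluster state (a sketch; the binary path is the one used by this run):

  # Schema-check the rendered config (read-only; nothing is started):
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml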
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
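Note: control-plane load-balancing was skipped because `lsmod | grep ip_vs` returned nothing (logged above); the ARP-based VIP settings in the manifest are unaffected. On a host where ipvs load-balancing matters, availability can be checked and the modules loaded like so (a sketch; whether they exist depends on the kernel build):

  # Try to load the ipvs module family; report if the kernel does not ship them:
  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh \
    || echo "ip_vs modules not available in this kernel"
  lsmod | grep ip_vs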
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
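Note: this bash one-liner, like the host.minikube.internal one earlier, filters any stale entry out of /etc/hosts and appends a fresh one; the end state can be confirmed with the grep below (expected values taken from the addresses logged in this run):

  # Both minikube-internal names should now resolve inside the node:
  grep minikube.internal /etc/hosts
  # expected:
  #   192.168.49.1    host.minikube.internal
  #   192.168.49.254  control-plane.minikube.internal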
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
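Note: the 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention, which is what the `openssl x509 -hash` runs compute: OpenSSL looks certificates up in /etc/ssl/certs as <hash>.0. A sketch of checking one of them by hand:

  # The link name is the subject hash of the cert, suffix .0 for the first cert with that hash:
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  ls -l /etc/ssl/certs/b5213941.0   # should point at the minikubeCA.pem above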
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
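Note: the wait-control-plane failure above means none of the three static pods answered their health endpoints within 4m0s. Following the log's own crictl hint, a first-pass diagnosis on the node would look like this (a sketch; container IDs come from the first command):

  # List every kube container, including exited ones, then read the kubelet side:
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
  sudo journalctl -u kubelet --no-pager -n 50
  # then: sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>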
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.226994647Z" level=info msg="createCtr: removing container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.227039027Z" level=info msg="createCtr: deleting container e81130c72e31de2135d35b58019329dc05a0077f0ff0978de60fbc36ae0dbe47 from storage" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:28 ha-798711 crio[783]: time="2025-10-02T21:21:28.229654881Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=fb113702-cc7a-47ea-a003-e01bb44ae831 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.201089546Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5e72deda-ace2-4a89-af26-2c05b1e13c4e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.202143745Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=beaef750-3ead-4c9f-9995-7df9d4494893 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.203179849Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.203395655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.207566641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.208144101Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.222693187Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224095207Z" level=info msg="createCtr: deleting container ID 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13 from idIndex" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224131745Z" level=info msg="createCtr: removing container 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.224161838Z" level=info msg="createCtr: deleting container 798d2813814bc5e821f4ebdc6f0e042ad3ce3fcb642ed53e9eca5c8f5b964a13 from storage" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:31 ha-798711 crio[783]: time="2025-10-02T21:21:31.226036803Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=61f3b002-b191-4355-a53d-dedc8f986f3f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.201215358Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=3cb11c85-59d4-4d2a-8c6c-676667ea99a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.202145818Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=59775eb7-04da-445f-bde7-fc520bb61f70 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.202951013Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.203209813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.20640077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.206834446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.223706115Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.225088177Z" level=info msg="createCtr: deleting container ID d597ba948c8665d3c909c815bf0100e365629b65535fabf0894d689677f70a92 from idIndex" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.225127191Z" level=info msg="createCtr: removing container d597ba948c8665d3c909c815bf0100e365629b65535fabf0894d689677f70a92" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.225157038Z" level=info msg="createCtr: deleting container d597ba948c8665d3c909c815bf0100e365629b65535fabf0894d689677f70a92 from storage" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:33 ha-798711 crio[783]: time="2025-10-02T21:21:33.227180826Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=56bb3d18-c656-4158-aa12-d48015536251 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:21:34.212796    4286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:34.213349    4286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:34.215088    4286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:34.215546    4286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:21:34.217117    4286 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:21:34 up  3:03,  0 user,  load average: 0.28, 0.11, 0.15
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230113    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:28 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:28 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:28 ha-798711 kubelet[1962]: E1002 21:21:28.230157    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:21:29 ha-798711 kubelet[1962]: E1002 21:21:29.843246    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: I1002 21:21:30.022973    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:21:30 ha-798711 kubelet[1962]: E1002 21:21:30.023427    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.109326    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.200591    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226349    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:31 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:31 ha-798711 kubelet[1962]:  > podSandboxID="809957a7718c537a272955808ab83d0d209917c15901f264880b1842ca38ceb3"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226496    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:31 ha-798711 kubelet[1962]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:31 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:31 ha-798711 kubelet[1962]: E1002 21:21:31.226540    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:21:33 ha-798711 kubelet[1962]: E1002 21:21:33.200697    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:21:33 ha-798711 kubelet[1962]: E1002 21:21:33.227461    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:21:33 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:33 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:21:33 ha-798711 kubelet[1962]: E1002 21:21:33.227582    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:21:33 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:21:33 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:21:33 ha-798711 kubelet[1962]: E1002 21:21:33.227629    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (297.649709ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:21:34.589107  147618 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.58s)
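Editor's triage note: every control-plane container in the CRI-O and kubelet logs above dies with the same error, "container create failed: cannot open sd-bus: No such file or directory", which is why kube-apiserver, kube-controller-manager, and kube-scheduler never pass their health checks, and why the later status error ("ha-798711" does not appear in the kubeconfig) follows. That sd-bus message usually points at CRI-O being configured with the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node. A minimal way to check that theory, sketched here but not run against this cluster (the paths and config keys below are standard CRI-O/kubelet defaults, assumed rather than taken from this log):

	# which cgroup manager is CRI-O using on the node?
	out/minikube-linux-amd64 -p ha-798711 ssh -- sudo crio config 2>/dev/null | grep cgroup_manager
	# is a systemd bus socket reachable inside the node?
	out/minikube-linux-amd64 -p ha-798711 ssh -- ls -l /run/systemd/private /run/dbus/system_bus_socket

If cgroup_manager is "systemd" and neither socket exists, switching CRI-O to cgroupfs (cgroup_manager = "cgroupfs" under [crio.runtime] in /etc/crio/crio.conf) together with cgroupDriver: cgroupfs in the kubelet configuration would be one candidate fix; whether that is appropriate for this node image is an assumption this run does not confirm.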

x
+
TestMultiControlPlane/serial/RestartSecondaryNode (37.53s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 node start m02 --alsologtostderr -v 5: exit status 85 (65.4892ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1002 21:21:34.653209  147729 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:34.653518  147729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:34.653530  147729 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:34.653534  147729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:34.653757  147729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:34.654053  147729 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:34.654430  147729 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:34.656410  147729 out.go:203] 
	W1002 21:21:34.657802  147729 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1002 21:21:34.657819  147729 out.go:285] * 
	* 
	W1002 21:21:34.662218  147729 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:21:34.663977  147729 out.go:203] 

** /stderr **
ha_test.go:424: I1002 21:21:34.653209  147729 out.go:360] Setting OutFile to fd 1 ...
I1002 21:21:34.653518  147729 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:21:34.653530  147729 out.go:374] Setting ErrFile to fd 2...
I1002 21:21:34.653534  147729 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:21:34.653757  147729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:21:34.654053  147729 mustload.go:65] Loading cluster: ha-798711
I1002 21:21:34.654430  147729 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:21:34.656410  147729 out.go:203] 
W1002 21:21:34.657802  147729 out.go:285] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1002 21:21:34.657819  147729 out.go:285] * 
* 
W1002 21:21:34.662218  147729 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1002 21:21:34.663977  147729 out.go:203] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-798711 node start m02 --alsologtostderr -v 5": exit status 85
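Editor's note: exit status 85 (GUEST_NODE_RETRIEVE: Could not find node m02) is downstream of the earlier StartCluster failure rather than an independent bug: the initial start never got past kubeadm init, so the m02 node was never added to the profile and "node start m02" has nothing to restart. Listing the profile's nodes, e.g. with "out/minikube-linux-amd64 -p ha-798711 node list" (not run here), would be expected to show only the primary node.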
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (288.405177ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:34.714592  147741 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:34.714703  147741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:34.714720  147741 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:34.714726  147741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:34.714965  147741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:34.715149  147741 out.go:368] Setting JSON to false
	I1002 21:21:34.715178  147741 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:34.715334  147741 notify.go:220] Checking for updates...
	I1002 21:21:34.715490  147741 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:34.715502  147741 status.go:174] checking status of ha-798711 ...
	I1002 21:21:34.715965  147741 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:34.734185  147741 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:34.734239  147741 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:34.734550  147741 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:34.752111  147741 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:34.752448  147741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:34.752497  147741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:34.769689  147741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:34.869197  147741 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:34.875429  147741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:34.888264  147741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:34.943037  147741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:34.933608364 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:34.943473  147741 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:34.943500  147741 api_server.go:166] Checking apiserver status ...
	I1002 21:21:34.943532  147741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:34.953530  147741 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:34.953550  147741 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:34.953560  147741 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:34.957988   84100 retry.go:31] will retry after 628.774855ms: exit status 6
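Note: the waits in these retry.go lines grow roughly exponentially with jitter (0.63s here, then 0.75s, 2.1s, 4.0s, 4.2s, 7.5s, and 14.8s below). A sketch of that backoff shape in Go (illustrative; the base interval and jitter fraction are assumptions, not taken from minikube's retry helper):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts are exhausted, sleeping an
	// exponentially growing, jittered interval between failures.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := base << i                                     // exponential: base * 2^i
			wait += time.Duration(rand.Int63n(int64(wait)/2 + 1)) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		_ = retry(5, 500*time.Millisecond, func() error { return fmt.Errorf("exit status 6") })
	}
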
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (286.677856ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:35.630293  147868 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:35.630585  147868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:35.630596  147868 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:35.630603  147868 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:35.630886  147868 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:35.631080  147868 out.go:368] Setting JSON to false
	I1002 21:21:35.631114  147868 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:35.631222  147868 notify.go:220] Checking for updates...
	I1002 21:21:35.631467  147868 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:35.631484  147868 status.go:174] checking status of ha-798711 ...
	I1002 21:21:35.631921  147868 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:35.651385  147868 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:35.651453  147868 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:35.651822  147868 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:35.669009  147868 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:35.669299  147868 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:35.669353  147868 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:35.686890  147868 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:35.786182  147868 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:35.792510  147868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:35.804773  147868 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:35.859858  147868 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:35.849853116 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:35.860252  147868 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:35.860277  147868 api_server.go:166] Checking apiserver status ...
	I1002 21:21:35.860314  147868 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:35.870558  147868 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:35.870585  147868 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:35.870597  147868 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:35.874883   84100 retry.go:31] will retry after 751.705681ms: exit status 6
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (289.871609ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:36.670724  147985 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:36.670854  147985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:36.670864  147985 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:36.670867  147985 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:36.671072  147985 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:36.671244  147985 out.go:368] Setting JSON to false
	I1002 21:21:36.671272  147985 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:36.671470  147985 notify.go:220] Checking for updates...
	I1002 21:21:36.671618  147985 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:36.671634  147985 status.go:174] checking status of ha-798711 ...
	I1002 21:21:36.672081  147985 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:36.691912  147985 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:36.691945  147985 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:36.692279  147985 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:36.709987  147985 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:36.710248  147985 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:36.710300  147985 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:36.727908  147985 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:36.827295  147985 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:36.834023  147985 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:36.846700  147985 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:36.902030  147985 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:36.891344432 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:36.902440  147985 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:36.902465  147985 api_server.go:166] Checking apiserver status ...
	I1002 21:21:36.902502  147985 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:36.912602  147985 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:36.912621  147985 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:36.912633  147985 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:36.917007   84100 retry.go:31] will retry after 2.137742044s: exit status 6
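Note: every probe fails at the same point: status.go:458 cannot find cluster "ha-798711" in the kubeconfig, which is what the "kubeconfig: Misconfigured" line and the `minikube update-context` hint refer to. The check amounts to loading the kubeconfig and looking up the profile's cluster entry; a standalone sketch using k8s.io/client-go (assumed dependency, simplified relative to minikube's implementation):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// For brevity this reads the path from $KUBECONFIG rather than the
		// usual default-path chain.
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cluster, ok := cfg.Clusters["ha-798711"]
		if !ok {
			// The condition behind the status.go:458 error above.
			fmt.Println(`"ha-798711" does not appear in the kubeconfig`)
			os.Exit(1)
		}
		fmt.Println("endpoint:", cluster.Server)
	}
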
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (285.366934ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:39.098110  148096 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:39.098218  148096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:39.098229  148096 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:39.098233  148096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:39.098450  148096 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:39.098620  148096 out.go:368] Setting JSON to false
	I1002 21:21:39.098647  148096 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:39.098769  148096 notify.go:220] Checking for updates...
	I1002 21:21:39.098984  148096 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:39.099021  148096 status.go:174] checking status of ha-798711 ...
	I1002 21:21:39.099408  148096 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:39.118065  148096 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:39.118090  148096 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:39.118315  148096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:39.136982  148096 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:39.137246  148096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:39.137297  148096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:39.155145  148096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:39.252830  148096 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:39.258968  148096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:39.270714  148096 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:39.324839  148096 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:39.314888616 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:39.325283  148096 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:39.325311  148096 api_server.go:166] Checking apiserver status ...
	I1002 21:21:39.325356  148096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:39.335655  148096 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:39.335680  148096 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:39.335694  148096 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:39.340660   84100 retry.go:31] will retry after 3.958227842s: exit status 6
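Note: before each SSH probe the status helper also samples disk usage of /var inside the node: `df -h /var` prints a header plus one data row, and `awk 'NR==2{print $5}'` picks the Use% column from that data row. The same pipeline driven from Go (illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// NR==2 selects the data row under df's header; $5 is the Use% field.
		out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("/var usage:", strings.TrimSpace(string(out)))
	}
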
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (293.600414ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:43.342592  148232 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:43.343269  148232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:43.343294  148232 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:43.343302  148232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:43.343788  148232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:43.344261  148232 out.go:368] Setting JSON to false
	I1002 21:21:43.344303  148232 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:43.344419  148232 notify.go:220] Checking for updates...
	I1002 21:21:43.344682  148232 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:43.344701  148232 status.go:174] checking status of ha-798711 ...
	I1002 21:21:43.345153  148232 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:43.363732  148232 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:43.363775  148232 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:43.364050  148232 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:43.381720  148232 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:43.381973  148232 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:43.382009  148232 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:43.399711  148232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:43.500115  148232 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:43.506368  148232 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:43.518652  148232 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:43.578319  148232 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:43.568096992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:43.578906  148232 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:43.578940  148232 api_server.go:166] Checking apiserver status ...
	I1002 21:21:43.578980  148232 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:43.589079  148232 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:43.589106  148232 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:43.589122  148232 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:43.593393   84100 retry.go:31] will retry after 4.166785521s: exit status 6
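Note: the SSH endpoint itself comes from the docker inspect Go template visible in each probe: `index .NetworkSettings.Ports "22/tcp"` fetches the bindings for the container's SSH port, and `.HostPort` on the first binding yields the published port (32783 here, dialed on 127.0.0.1). A standalone sketch of the same lookup (illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same template the status probe runs against the ha-798711 container.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, "ha-798711").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
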
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (284.743551ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:47.806636  148367 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:47.806931  148367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:47.806943  148367 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:47.806949  148367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:47.807184  148367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:47.807403  148367 out.go:368] Setting JSON to false
	I1002 21:21:47.807436  148367 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:47.807475  148367 notify.go:220] Checking for updates...
	I1002 21:21:47.807846  148367 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:47.807864  148367 status.go:174] checking status of ha-798711 ...
	I1002 21:21:47.808255  148367 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:47.827227  148367 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:47.827287  148367 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:47.827680  148367 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:47.845458  148367 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:47.845791  148367 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:47.845833  148367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:47.862766  148367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:47.961968  148367 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:47.968145  148367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:47.980107  148367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:48.033466  148367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:48.023969413 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:48.033919  148367 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:48.033947  148367 api_server.go:166] Checking apiserver status ...
	I1002 21:21:48.033991  148367 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:48.043854  148367 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:48.043878  148367 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:48.043893  148367 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:48.048886   84100 retry.go:31] will retry after 7.45756938s: exit status 6
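Note: the apiserver check is just pgrep run over SSH: -x anchors the match, -f matches against the full command line, -n takes the newest match, and pgrep's exit status 1 means "no process found", which api_server.go:170 records as stopped. A local sketch of the same probe (run on the node itself; illustrative):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			fmt.Println("stopped: unable to get apiserver pid") // pgrep matched nothing
			return
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver pid: %s", out)
	}
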
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (288.620856ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:21:55.551343  148526 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:21:55.551590  148526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:55.551597  148526 out.go:374] Setting ErrFile to fd 2...
	I1002 21:21:55.551601  148526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:21:55.551785  148526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:21:55.551947  148526 out.go:368] Setting JSON to false
	I1002 21:21:55.551975  148526 mustload.go:65] Loading cluster: ha-798711
	I1002 21:21:55.552092  148526 notify.go:220] Checking for updates...
	I1002 21:21:55.552296  148526 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:21:55.552309  148526 status.go:174] checking status of ha-798711 ...
	I1002 21:21:55.552763  148526 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:21:55.571314  148526 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:21:55.571341  148526 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:55.571662  148526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:21:55.589547  148526 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:21:55.589831  148526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:21:55.589875  148526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:21:55.608042  148526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:21:55.708224  148526 ssh_runner.go:195] Run: systemctl --version
	I1002 21:21:55.714887  148526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:21:55.727109  148526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:21:55.782689  148526 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:21:55.771656183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:21:55.783183  148526 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:21:55.783214  148526 api_server.go:166] Checking apiserver status ...
	I1002 21:21:55.783260  148526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:21:55.793415  148526 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:21:55.793441  148526 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:21:55.793452  148526 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1002 21:21:55.798085   84100 retry.go:31] will retry after 14.763122929s: exit status 6
E1002 21:22:02.779994   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 6 (294.851332ms)

-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	I1002 21:22:10.612359  148711 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:10.612627  148711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:10.612637  148711 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:10.612641  148711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:10.612822  148711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:22:10.612975  148711 out.go:368] Setting JSON to false
	I1002 21:22:10.613000  148711 mustload.go:65] Loading cluster: ha-798711
	I1002 21:22:10.613097  148711 notify.go:220] Checking for updates...
	I1002 21:22:10.613305  148711 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:10.613318  148711 status.go:174] checking status of ha-798711 ...
	I1002 21:22:10.613750  148711 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:10.634480  148711 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:22:10.634503  148711 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:10.634825  148711 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:10.653315  148711 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:10.653560  148711 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:22:10.653597  148711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:10.671172  148711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:10.770282  148711 ssh_runner.go:195] Run: systemctl --version
	I1002 21:22:10.776592  148711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:22:10.788596  148711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:10.848194  148711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:22:10.838131288 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	E1002 21:22:10.848648  148711 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:10.848676  148711 api_server.go:166] Checking apiserver status ...
	I1002 21:22:10.848713  148711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:22:10.859259  148711 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:22:10.859303  148711 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:22:10.859319  148711 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Misconfigured Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5" : exit status 6
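Note: the repeated exit status 6 is itself decodable: minikube status composes its exit code as a bitmask of per-check flags, and under the mapping assumed below (host = 1, cluster components = 2, kubeconfig/Kubernetes = 4), 6 = 2|4 matches the output above exactly: host Running, apiserver Stopped, kubeconfig Misconfigured. A decoding sketch under that assumption:

	package main

	import "fmt"

	// Assumed bit meanings for minikube status exit codes; consistent with
	// the observed exit 6 here, but not taken from minikube's source.
	const (
		hostNotRunning    = 1 << 0 // 1
		clusterNotRunning = 1 << 1 // 2
		k8sNotConfigured  = 1 << 2 // 4
	)

	func main() {
		code := 6
		fmt.Println("host not running:     ", code&hostNotRunning != 0)
		fmt.Println("cluster not running:  ", code&clusterNotRunning != 0)
		fmt.Println("k8s/kubeconfig not OK:", code&k8sNotConfigured != 0)
	}
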
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
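The inspect dump above also records the host ports Docker mapped for the node container (22/tcp -> 127.0.0.1:32783, 8443/tcp -> 127.0.0.1:32786, and so on). A minimal sketch of pulling one mapping out with the same Go-template style the provisioning log uses further down (assuming the container is still named ha-798711):

	# Print the host port bound to the container's SSH port (22/tcp).
	docker container inspect ha-798711 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# For this run the output would be: 32783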
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (291.828892ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:22:11.160303  148834 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
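The exit status 6 traces back to the stale kubeconfig reported in the stdout above: the ha-798711 endpoint is missing from the kubeconfig, so status cannot resolve it. The remediation the warning itself suggests, sketched here with the profile flag as an assumption:

	# Rewrite the kubeconfig entry for this profile to the current endpoint.
	minikube update-context -p ha-798711
	# Then verify kubectl resolves the context:
	kubectl config current-context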
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node stop m02 --alsologtostderr -v 5                                                                  │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node start m02 --alsologtostderr -v 5                                                                 │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
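A quick sanity check of what was just created, sketched in the same docker-template style as the commands in this log (the field names follow docker network inspect's JSON):

	# Confirm the subnet and gateway of the freshly created network.
	docker network inspect ha-798711 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# Expected for this run: 192.168.49.0/24 via 192.168.49.1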
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
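For readability, the single-line docker run above corresponds to roughly the following (same flags, reflowed; the minikube --label flags are omitted here for brevity):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname ha-798711 --name ha-798711 \
	  --network ha-798711 --ip 192.168.49.2 \
	  --volume ha-798711:/var \
	  --memory=3072mb -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 \
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d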
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
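Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings sketched below; this is a reconstruction from the commands in the log, not a capture of the actual file:

	# Reconstructed (hypothetical) contents after the edits:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	# A quick check that the edits are in place:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf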
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
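The unit fragment above becomes the kubelet's systemd drop-in (written as 10-kubeadm.conf a few lines below). A minimal sketch for verifying the override took effect on the node, assuming standard systemd tooling inside the kicbase container:

    # Show the merged kubelet unit, drop-ins included:
    systemctl cat kubelet
    # Print just the effective ExecStart (should list the flags above):
    systemctl show -p ExecStart kubelet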
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
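The stanzas above (InitConfiguration, ClusterConfiguration, plus the kubelet and kube-proxy configs) are what kubeadm later consumes as /var/tmp/minikube/kubeadm.yaml. A minimal sketch for checking such a file by hand before init; `kubeadm config validate` is assumed to be available in this kubeadm release:

    # Validate the generated config against the kubeadm API schema:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml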
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
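The static pod manifest above is generated with leader election enabled but, per the lsmod check a few lines earlier, without IPVS-based control-plane load-balancing. A minimal sketch of that same probe, plus an attempt to load the module (likely to fail on this kernel, given the missing-module warnings later in the log):

    # kube-vip's precondition, exactly as minikube runs it:
    sudo sh -c "lsmod | grep ip_vs"
    # Try loading the module; fails if the kernel does not ship it:
    sudo modprobe ip_vs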
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
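Each certificate block above follows the same pattern: install the PEM, hash it, and link it under the hash-based name OpenSSL expects. A minimal sketch of one iteration (cert path and hash taken from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    # The subject hash (b5213941 for this CA above) names the symlink:
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"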
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
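Each grep-then-rm pair above is the same stale-config idiom applied in turn to the four kubeconfig files. A minimal sketch of one iteration (file name taken from the log):

    f=/etc/kubernetes/scheduler.conf
    # Drop the file unless it already targets the expected endpoint:
    sudo grep -q "https://control-plane.minikube.internal:8443" "$f" || sudo rm -f "$f"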
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
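At this point all three control-plane components failed their health checks and, as the empty crictl listings at the end of this log confirm, no kube-* containers exist to inspect. A minimal triage sketch combining kubeadm's suggested commands with the kubelet journal (journalctl usage is standard systemd tooling, not taken from this run):

    SOCK=unix:///var/run/crio/crio.sock
    # List every kube-* container, running or exited:
    sudo crictl --runtime-endpoint "$SOCK" ps -a | grep kube | grep -v pause
    # Inspect a failing container (substitute a real ID from the listing):
    sudo crictl --runtime-endpoint "$SOCK" logs CONTAINERID
    # See why the kubelet never ran the static pods:
    sudo journalctl -u kubelet --no-pager | tail -n 50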
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output quoted above; verbatim duplicate elided]
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output quoted above; verbatim duplicate elided]
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
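	[editor's note] The three control-plane probes kubeadm timed out on are plain HTTPS endpoints taken straight from the log above, so they can be replayed by hand on a retry to see which component recovers first. A minimal sketch, assuming the ha-798711 node is still running and reachable via `minikube ssh`:
	    $ out/minikube-linux-amd64 ssh -p ha-798711 -- curl -ks https://192.168.49.2:8443/livez     # kube-apiserver
	    $ out/minikube-linux-amd64 ssh -p ha-798711 -- curl -ks https://127.0.0.1:10257/healthz     # kube-controller-manager
	    $ out/minikube-linux-amd64 ssh -p ha-798711 -- curl -ks https://127.0.0.1:10259/livez       # kube-scheduler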
	
	
	==> CRI-O <==
	Oct 02 21:21:59 ha-798711 crio[783]: time="2025-10-02T21:21:59.224101174Z" level=info msg="createCtr: removing container ee1236ca5b68eb31d18148505ef01891e175844eec83aed84c084f1eddf100f3" id=c07061ed-2e7e-4d9e-936a-1d7a1414b070 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:59 ha-798711 crio[783]: time="2025-10-02T21:21:59.224132005Z" level=info msg="createCtr: deleting container ee1236ca5b68eb31d18148505ef01891e175844eec83aed84c084f1eddf100f3 from storage" id=c07061ed-2e7e-4d9e-936a-1d7a1414b070 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:21:59 ha-798711 crio[783]: time="2025-10-02T21:21:59.226000527Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=c07061ed-2e7e-4d9e-936a-1d7a1414b070 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.200893459Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=ee09eb31-17c0-4976-9ec2-63f5b254644d name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.201729309Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=92e2c95e-efdb-44c5-8d14-e6a36ff8c49a name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.202556622Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.202798087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.206074576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.206461643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.220580551Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.221973737Z" level=info msg="createCtr: deleting container ID 883481a345f343c6d890c1293451fae62470fa2303d97729283ee3d4caee074f from idIndex" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.222006043Z" level=info msg="createCtr: removing container 883481a345f343c6d890c1293451fae62470fa2303d97729283ee3d4caee074f" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.222034939Z" level=info msg="createCtr: deleting container 883481a345f343c6d890c1293451fae62470fa2303d97729283ee3d4caee074f from storage" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:04 ha-798711 crio[783]: time="2025-10-02T21:22:04.224136169Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=bc5830c2-f1c2-40e9-81b1-cf21198c454d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.200369868Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=a65339f6-fa8c-4bae-b859-64905e4ff5d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.201327557Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=f3d20da3-c182-41d5-84aa-58f7102c0885 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.20225716Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.202509443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.205819236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.206213034Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.222802114Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.224415736Z" level=info msg="createCtr: deleting container ID 8ca0a1dcb78b23de2e1475590735c0dca36ca0ae706ae352100b9571670c0675 from idIndex" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.224457034Z" level=info msg="createCtr: removing container 8ca0a1dcb78b23de2e1475590735c0dca36ca0ae706ae352100b9571670c0675" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.224495367Z" level=info msg="createCtr: deleting container 8ca0a1dcb78b23de2e1475590735c0dca36ca0ae706ae352100b9571670c0675 from storage" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.227277647Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
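	[editor's note] Every CreateContainer attempt above fails with "cannot open sd-bus: No such file or directory", which is the likely root cause of the control-plane timeout: the error pattern suggests the OCI runtime is using the systemd cgroup manager while systemd's bus socket is unreachable inside the node, so no kube-system container is ever created. A hedged triage sketch (the socket paths and the cgroupfs fallback are assumptions, not taken from this run):
	    $ ls -l /run/systemd/private /run/dbus/system_bus_socket          # is a systemd bus socket present at all?
	    $ grep -rn cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
	    # one possible workaround is switching CRI-O to the cgroupfs manager:
	    #   [crio.runtime]
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    $ sudo systemctl restart crio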
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:22:11.743941    4624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:11.744491    4624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:11.746024    4624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:11.746441    4624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:11.747586    4624 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:22:11 up  3:04,  0 user,  load average: 0.15, 0.10, 0.15
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:21:59 ha-798711 kubelet[1962]: E1002 21:21:59.226458    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:22:01 ha-798711 kubelet[1962]: E1002 21:22:01.112669    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:22:03 ha-798711 kubelet[1962]: E1002 21:22:03.627458    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-798711&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.200405    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.224435    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:22:04 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:04 ha-798711 kubelet[1962]:  > podSandboxID="55af7e8787f2a5119f69d0eccdb6fb36e84f93e4a4a878ed95b1aed61e1818f5"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.224537    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:22:04 ha-798711 kubelet[1962]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:04 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.224569    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.519487    1962 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 02 21:22:04 ha-798711 kubelet[1962]: E1002 21:22:04.848908    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:22:05 ha-798711 kubelet[1962]: I1002 21:22:05.033511    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:22:05 ha-798711 kubelet[1962]: E1002 21:22:05.033954    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:22:07 ha-798711 kubelet[1962]: E1002 21:22:07.224729    1962 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.199934    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.227617    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:22:10 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:10 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.227721    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:22:10 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:10 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.227774    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:22:11 ha-798711 kubelet[1962]: E1002 21:22:11.113873    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
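	[editor's note] The kubelet entries are all downstream of the same fault: CreateContainerError for etcd, kube-scheduler, and kube-apiserver (the sd-bus error surfacing through the CRI), plus connection-refused on 192.168.49.2:8443 because the apiserver container never starts. A quick confirmation that sandboxes exist while containers do not, assuming crictl is available on the node:
	    $ sudo crictl pods        # pod sandboxes are created (see the podSandboxID values above)
	    $ sudo crictl ps -a       # but the container list stays empty, matching the container status section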
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (295.583507ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:22:12.118430  149160 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
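[editor's note] The status stderr also shows the profile missing from the kubeconfig ("ha-798711" does not appear in the kubeconfig file), which matches the stale-context warning in stdout. Once the cluster is reachable again, the entry can usually be re-synced; a sketch, assuming the default kubeconfig location:
    $ out/minikube-linux-amd64 update-context -p ha-798711
    $ kubectl config get-contexts     # verify the ha-798711 context is back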
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (37.53s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-798711" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-798711" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 137093,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:11:12.231995655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dff7695c206c75496a82b03b4cb8baaa7c43c19b01b7f03f1eecaf27d7d3cea7",
	            "SandboxKey": "/var/run/docker/netns/dff7695c206c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:2f:81:cd:1d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "3f06532229560b3fca9b42b36cd7815a76d73449625385a23105f652639bf820",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
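In the inspect output above, note how the two port sections relate: "PortBindings" only requests an ephemeral host port on 127.0.0.1 (the empty "HostPort" values), while "NetworkSettings.Ports" records what the daemon actually assigned. A single mapping can be pulled back out with the same Go-template query the provisioner itself runs later in these logs; a minimal sketch using this run's container name and the 22/tcp key:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-798711
    # prints 32783 for this particular run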
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 6 (290.026458ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:22:12.743107  149416 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
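The exit status 6 follows directly from the stderr above: the "ha-798711" entry is missing from the kubeconfig, so the status check cannot resolve an apiserver endpoint even though the host container is Running. The remediation the warning itself suggests, sketched against this run's binary and profile:

    out/minikube-linux-amd64 -p ha-798711 update-context
    kubectl config current-context    # should report ha-798711 once the context is repaired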
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                      ARGS                                                       │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-012915 update-context --alsologtostderr -v=2                                                         │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ image          │ functional-012915 image ls                                                                                      │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:07 UTC │ 02 Oct 25 21:07 UTC │
	│ delete         │ -p functional-012915                                                                                            │ functional-012915 │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │ 02 Oct 25 21:11 UTC │
	│ start          │ ha-798711 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:11 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- rollout status deployment/busybox                                                          │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                                            │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                                     │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl        │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                           │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node add --alsologtostderr -v 5                                                                       │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node stop m02 --alsologtostderr -v 5                                                                  │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node           │ ha-798711 node start m02 --alsologtostderr -v 5                                                                 │ ha-798711         │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
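Every kubectl entry in the audit trail uses minikube's passthrough form, in which everything after the bare -- is forwarded to a kubectl matched to the cluster's Kubernetes version. The general shape, with this profile:

    out/minikube-linux-amd64 -p ha-798711 kubectl -- get pods -o wide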
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:11:07
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:11:07.011268  136530 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:11:07.011538  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011547  136530 out.go:374] Setting ErrFile to fd 2...
	I1002 21:11:07.011551  136530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:11:07.011722  136530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:11:07.012227  136530 out.go:368] Setting JSON to false
	I1002 21:11:07.013179  136530 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10408,"bootTime":1759429059,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:11:07.013269  136530 start.go:140] virtualization: kvm guest
	I1002 21:11:07.015274  136530 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:11:07.017158  136530 notify.go:220] Checking for updates...
	I1002 21:11:07.017163  136530 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:11:07.018762  136530 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:11:07.020199  136530 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:11:07.021595  136530 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:11:07.026346  136530 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:11:07.027772  136530 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:11:07.029494  136530 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:11:07.053451  136530 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:11:07.053557  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.107710  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.098091423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.107845  136530 docker.go:318] overlay module found
	I1002 21:11:07.110616  136530 out.go:179] * Using the docker driver based on user configuration
	I1002 21:11:07.111896  136530 start.go:304] selected driver: docker
	I1002 21:11:07.111910  136530 start.go:924] validating driver "docker" against <nil>
	I1002 21:11:07.111921  136530 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:11:07.112470  136530 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:11:07.169495  136530 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:11:07.159474228 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:11:07.169726  136530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:11:07.169990  136530 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:11:07.171958  136530 out.go:179] * Using Docker driver with root privileges
	I1002 21:11:07.173343  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:07.173441  136530 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1002 21:11:07.173456  136530 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:11:07.173542  136530 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:07.175120  136530 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:11:07.176484  136530 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:11:07.177782  136530 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:11:07.178953  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.178998  136530 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:11:07.179008  136530 cache.go:58] Caching tarball of preloaded images
	I1002 21:11:07.179055  136530 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:11:07.179140  136530 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:11:07.179155  136530 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:11:07.179617  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:07.179646  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json: {Name:mk24e10840872212e0c4804b5206e3dd1c56c3fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:07.202297  136530 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:11:07.202321  136530 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:11:07.202340  136530 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:11:07.202386  136530 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:11:07.202521  136530 start.go:364] duration metric: took 110.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:11:07.202564  136530 start.go:93] Provisioning new machine with config: &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:11:07.202671  136530 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:11:07.205585  136530 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1002 21:11:07.205847  136530 start.go:159] libmachine.API.Create for "ha-798711" (driver="docker")
	I1002 21:11:07.205884  136530 client.go:168] LocalClient.Create starting
	I1002 21:11:07.205984  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:11:07.206019  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206032  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206090  136530 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:11:07.206111  136530 main.go:141] libmachine: Decoding PEM data...
	I1002 21:11:07.206120  136530 main.go:141] libmachine: Parsing certificate...
	I1002 21:11:07.206477  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:11:07.224617  136530 cli_runner.go:211] docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:11:07.224705  136530 network_create.go:284] running [docker network inspect ha-798711] to gather additional debugging logs...
	I1002 21:11:07.224729  136530 cli_runner.go:164] Run: docker network inspect ha-798711
	W1002 21:11:07.242107  136530 cli_runner.go:211] docker network inspect ha-798711 returned with exit code 1
	I1002 21:11:07.242141  136530 network_create.go:287] error running [docker network inspect ha-798711]: docker network inspect ha-798711: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-798711 not found
	I1002 21:11:07.242158  136530 network_create.go:289] output of [docker network inspect ha-798711]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-798711 not found
	
	** /stderr **
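The "network ha-798711 not found" error is the expected first probe on a fresh profile: minikube inspects for an existing network, and on failure picks a free private subnet and creates one, which succeeds a few lines below. The same check-then-create flow, condensed from the full create command at 21:11:07.261714:

    docker network inspect ha-798711 >/dev/null 2>&1 || \
      docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 ha-798711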
	I1002 21:11:07.242304  136530 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:07.261625  136530 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e30950}
	I1002 21:11:07.261663  136530 network_create.go:124] attempt to create docker network ha-798711 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1002 21:11:07.261714  136530 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-798711 ha-798711
	I1002 21:11:07.323535  136530 network_create.go:108] docker network ha-798711 192.168.49.0/24 created
	I1002 21:11:07.323569  136530 kic.go:121] calculated static IP "192.168.49.2" for the "ha-798711" container
	I1002 21:11:07.323626  136530 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:11:07.340067  136530 cli_runner.go:164] Run: docker volume create ha-798711 --label name.minikube.sigs.k8s.io=ha-798711 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:11:07.358599  136530 oci.go:103] Successfully created a docker volume ha-798711
	I1002 21:11:07.358674  136530 cli_runner.go:164] Run: docker run --rm --name ha-798711-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --entrypoint /usr/bin/test -v ha-798711:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:11:07.740312  136530 oci.go:107] Successfully prepared a docker volume ha-798711
	I1002 21:11:07.740362  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:07.740387  136530 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:11:07.740452  136530 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:11:12.127474  136530 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-798711:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.386980184s)
	I1002 21:11:12.127508  136530 kic.go:203] duration metric: took 4.387119309s to extract preloaded images to volume ...
	W1002 21:11:12.127599  136530 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:11:12.127639  136530 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:11:12.127684  136530 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:11:12.180864  136530 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-798711 --name ha-798711 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-798711 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-798711 --network ha-798711 --ip 192.168.49.2 --volume ha-798711:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
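The docker run above publishes each container port with an empty host side (--publish=127.0.0.1::22 and so on), which is why the inspect output earlier shows daemon-assigned ports such as 32783. All of the resulting mappings can be listed in one call:

    docker port ha-798711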
	I1002 21:11:12.449647  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Running}}
	I1002 21:11:12.468545  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.486700  136530 cli_runner.go:164] Run: docker exec ha-798711 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:11:12.530485  136530 oci.go:144] the created container "ha-798711" has a running status.
	I1002 21:11:12.530513  136530 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa...
	I1002 21:11:12.621877  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1002 21:11:12.621918  136530 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:11:12.647322  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.667608  136530 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:11:12.667635  136530 kic_runner.go:114] Args: [docker exec --privileged ha-798711 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:11:12.709963  136530 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:11:12.733453  136530 machine.go:93] provisionDockerMachine start ...
	I1002 21:11:12.733557  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.758977  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.759417  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.759445  136530 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:11:12.909642  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:12.909674  136530 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:11:12.909755  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:12.928113  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:12.928388  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:12.928406  136530 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:11:13.083355  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:11:13.083434  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.101793  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.102040  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.102060  136530 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:11:13.247306  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:11:13.247336  136530 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:11:13.247358  136530 ubuntu.go:190] setting up certificates
	I1002 21:11:13.247372  136530 provision.go:84] configureAuth start
	I1002 21:11:13.247436  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:13.266674  136530 provision.go:143] copyHostCerts
	I1002 21:11:13.266715  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266787  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:11:13.266800  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:11:13.266883  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:11:13.267006  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267035  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:11:13.267041  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:11:13.267084  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:11:13.267169  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267198  136530 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:11:13.267207  136530 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:11:13.267246  136530 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:11:13.267341  136530 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:11:13.727261  136530 provision.go:177] copyRemoteCerts
	I1002 21:11:13.727326  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:11:13.727362  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.745169  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:13.846909  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:11:13.846984  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:11:13.865470  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:11:13.865529  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:11:13.882643  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:11:13.882721  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:11:13.900201  136530 provision.go:87] duration metric: took 652.795971ms to configureAuth
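configureAuth generated a server certificate whose SANs (127.0.0.1, 192.168.49.2, ha-798711, localhost, minikube) were listed a few lines up, then copied it into the node as /etc/docker/server.pem. One way to confirm the SANs landed in the cert, using the host-side path from the log:

    openssl x509 -in /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'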
	I1002 21:11:13.900236  136530 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:11:13.900416  136530 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:11:13.900542  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:13.918465  136530 main.go:141] libmachine: Using SSH client type: native
	I1002 21:11:13.918677  136530 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I1002 21:11:13.918695  136530 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:11:14.172069  136530 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:11:14.172104  136530 machine.go:96] duration metric: took 1.438623172s to provisionDockerMachine
	I1002 21:11:14.172118  136530 client.go:171] duration metric: took 6.966225105s to LocalClient.Create
	I1002 21:11:14.172141  136530 start.go:167] duration metric: took 6.966294745s to libmachine.API.Create "ha-798711"
	I1002 21:11:14.172154  136530 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:11:14.172167  136530 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:11:14.172258  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:11:14.172299  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.189540  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.292561  136530 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:11:14.296077  136530 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:11:14.296117  136530 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:11:14.296131  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:11:14.296196  136530 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:11:14.296316  136530 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:11:14.296329  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:11:14.296445  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:11:14.303907  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:14.323944  136530 start.go:296] duration metric: took 151.771678ms for postStartSetup
	I1002 21:11:14.324366  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.343445  136530 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:11:14.343729  136530 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:11:14.343800  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.360796  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.459696  136530 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:11:14.463988  136530 start.go:128] duration metric: took 7.26128699s to createHost
	I1002 21:11:14.464016  136530 start.go:83] releasing machines lock for "ha-798711", held for 7.261478527s
	I1002 21:11:14.464096  136530 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:11:14.481536  136530 ssh_runner.go:195] Run: cat /version.json
	I1002 21:11:14.481598  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.481603  136530 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:11:14.481658  136530 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:11:14.500071  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.500226  136530 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:11:14.652372  136530 ssh_runner.go:195] Run: systemctl --version
	I1002 21:11:14.658964  136530 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:11:14.692877  136530 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:11:14.697420  136530 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:11:14.697492  136530 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:11:14.723387  136530 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:11:14.723415  136530 start.go:495] detecting cgroup driver to use...
	I1002 21:11:14.723456  136530 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:11:14.723515  136530 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:11:14.739478  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:11:14.751376  136530 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:11:14.751423  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:11:14.766955  136530 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:11:14.783764  136530 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:11:14.863895  136530 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:11:14.949306  136530 docker.go:234] disabling docker service ...
	I1002 21:11:14.949379  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:11:14.967590  136530 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:11:14.979658  136530 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:11:15.061657  136530 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:11:15.140393  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:11:15.152601  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:11:15.166850  136530 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:11:15.166904  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.177169  136530 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:11:15.177235  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.186026  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.194576  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.203171  136530 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:11:15.211190  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.219965  136530 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.233033  136530 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:11:15.241455  136530 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:11:15.248556  136530 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:11:15.255449  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.330444  136530 ssh_runner.go:195] Run: sudo systemctl restart crio
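Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching CRI-O to the systemd cgroup manager with conmon in the pod cgroup, and opening unprivileged low ports, after which the daemon is restarted. A quick way to eyeball the drop-in from inside the node (expected values are a sketch inferred from the edits, not captured from this run):

    sudo cat /etc/crio/crio.conf.d/02-crio.conf
    # expected to contain, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]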
	I1002 21:11:15.432787  136530 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:11:15.432852  136530 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:11:15.436668  136530 start.go:563] Will wait 60s for crictl version
	I1002 21:11:15.436715  136530 ssh_runner.go:195] Run: which crictl
	I1002 21:11:15.440060  136530 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:11:15.463714  136530 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:11:15.463802  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.491109  136530 ssh_runner.go:195] Run: crio --version
	I1002 21:11:15.521346  136530 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:11:15.522699  136530 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:11:15.541190  136530 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:11:15.545646  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
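The /etc/hosts update above is a common sudo-safe idiom: filter out any stale entry, append the new one into a temp file named after the shell PID ($$), then copy the file back under sudo (a plain `>` redirect would not run elevated). Generalized sketch with the values from this run:

    HOST=host.minikube.internal; IP=192.168.49.1
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts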
	I1002 21:11:15.556771  136530 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:11:15.556876  136530 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:11:15.556929  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.586799  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.586820  136530 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:11:15.586870  136530 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:11:15.612661  136530 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:11:15.612684  136530 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:11:15.612693  136530 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:11:15.612798  136530 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
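The kubelet unit shown above is a systemd drop-in: the empty `ExecStart=` line clears the distro default before the minikube-specific command line is set. Assuming systemd on the node, the effective unit can be confirmed with:

    systemctl cat kubelet                  # unit file plus drop-ins such as 10-kubeadm.conf
    systemctl show kubelet -p ExecStart    # the ExecStart systemd will actually use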
	I1002 21:11:15.612863  136530 ssh_runner.go:195] Run: crio config
	I1002 21:11:15.658979  136530 cni.go:84] Creating CNI manager for ""
	I1002 21:11:15.659007  136530 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:11:15.659028  136530 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:11:15.659049  136530 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:11:15.659175  136530 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
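The generated config above is a four-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later promoted to kubeadm.yaml. Recent kubeadm releases can lint such a file offline; a hedged sketch, assuming the file has already been promoted to its final name:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml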
	
	I1002 21:11:15.659204  136530 kube-vip.go:115] generating kube-vip config ...
	I1002 21:11:15.659248  136530 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1002 21:11:15.671055  136530 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:11:15.671151  136530 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
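Note the fallback a few lines above (kube-vip.go:163): because `lsmod | grep ip_vs` found no IPVS modules inside the docker-driver node, kube-vip gives up on control-plane load-balancing and only advertises the VIP 192.168.49.254 via ARP. A quick check of the same precondition:

    lsmod | grep ip_vs || echo 'ip_vs not loaded'
    sudo modprobe ip_vs 2>/dev/null \
      || echo 'ip_vs unavailable (expected inside the kic container, which shares the host kernel)'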
	I1002 21:11:15.671194  136530 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:11:15.678899  136530 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:11:15.678959  136530 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1002 21:11:15.686596  136530 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:11:15.698707  136530 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:11:15.713602  136530 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:11:15.725761  136530 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I1002 21:11:15.739455  136530 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1002 21:11:15.742986  136530 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:11:15.752848  136530 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:11:15.830015  136530 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:11:15.855427  136530 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:11:15.855453  136530 certs.go:195] generating shared ca certs ...
	I1002 21:11:15.855474  136530 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.855659  136530 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:11:15.855698  136530 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:11:15.855706  136530 certs.go:257] generating profile certs ...
	I1002 21:11:15.855782  136530 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:11:15.855798  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt with IP's: []
	I1002 21:11:15.894594  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt ...
	I1002 21:11:15.894623  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt: {Name:mk8e7a357f870c9f30155ac231a0bbaccdc190b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894823  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key ...
	I1002 21:11:15.894839  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key: {Name:mk34480180ee6e1eba7371743e4ace15b5883cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:15.894936  136530 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab
	I1002 21:11:15.894951  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I1002 21:11:16.173425  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab ...
	I1002 21:11:16.173460  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab: {Name:mk8625adfa0e7523b2d4884a0a83b31b2e24bf31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173648  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab ...
	I1002 21:11:16.173665  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab: {Name:mka85192308ee660701dafde1f5bfabc87a0bfa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.173792  136530 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:11:16.173928  136530 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.0c362cab -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:11:16.174035  136530 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:11:16.174057  136530 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt with IP's: []
	I1002 21:11:16.292345  136530 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt ...
	I1002 21:11:16.292380  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt: {Name:mk08a919a359f5d200d01f0f786073287185c56d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292568  136530 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key ...
	I1002 21:11:16.292581  136530 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key: {Name:mk73f1fe8608c1e27d87dbaae07482a5181b8920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:11:16.292674  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:11:16.292694  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:11:16.292710  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:11:16.292727  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:11:16.292756  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:11:16.292772  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:11:16.292787  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:11:16.292801  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:11:16.292860  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:11:16.292897  136530 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:11:16.292908  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:11:16.292934  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:11:16.292959  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:11:16.292988  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:11:16.293030  136530 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:11:16.293059  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.293075  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.293090  136530 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.293703  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:11:16.311883  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:11:16.328993  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:11:16.345807  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:11:16.362863  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1002 21:11:16.380173  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1002 21:11:16.396882  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:11:16.414157  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:11:16.430933  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:11:16.449849  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:11:16.466901  136530 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:11:16.483766  136530 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
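At this point the node holds the shared CA material plus the profile certs generated above; note the apiserver cert was signed for the HA VIP 192.168.49.254 as well as the node IP and service IPs. A quick inventory sketch:

    sudo ls -la /var/lib/minikube/certs
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'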
	I1002 21:11:16.496034  136530 ssh_runner.go:195] Run: openssl version
	I1002 21:11:16.502181  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:11:16.510522  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514249  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.514304  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:11:16.548241  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:11:16.557232  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:11:16.565404  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.568992  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.569048  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:11:16.602419  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:11:16.611109  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:11:16.619339  136530 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.622995  136530 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.623058  136530 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:11:16.657469  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
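The `openssl x509 -hash` / `ln -fs` pairs above replicate what c_rehash does: OpenSSL locates trusted CAs through <subject-hash>.0 symlinks in /etc/ssl/certs, which is why minikubeCA.pem ends up behind b5213941.0. Minimal form of the idiom:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"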
	I1002 21:11:16.667508  136530 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:11:16.671500  136530 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:11:16.671555  136530 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:11:16.671638  136530 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:11:16.671682  136530 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:11:16.699951  136530 cri.go:89] found id: ""
	I1002 21:11:16.700005  136530 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:11:16.707922  136530 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:11:16.715779  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:11:16.715832  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:11:16.723507  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:11:16.723531  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:11:16.723583  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:11:16.730994  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:11:16.731047  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:11:16.738363  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:11:16.745807  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:11:16.745876  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:11:16.753683  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.761354  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:11:16.761409  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:11:16.768792  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:11:16.776594  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:11:16.776651  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:11:16.784834  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:11:16.822809  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:11:16.822871  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:11:16.843063  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:11:16.843152  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:11:16.843215  136530 kubeadm.go:318] OS: Linux
	I1002 21:11:16.843291  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:11:16.843360  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:11:16.843433  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:11:16.843517  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:11:16.843603  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:11:16.843671  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:11:16.843774  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:11:16.843870  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:11:16.900700  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:11:16.900891  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:11:16.901046  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:11:16.908833  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:11:16.910889  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:11:16.910995  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:11:16.911106  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:11:16.981451  136530 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:11:18.118250  136530 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:11:18.192277  136530 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:11:18.248603  136530 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:11:18.551414  136530 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:11:18.551561  136530 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:18.850112  136530 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:11:18.850237  136530 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1002 21:11:19.121059  136530 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:11:19.732990  136530 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:11:20.056927  136530 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:11:20.057029  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:11:20.224967  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:11:20.390401  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:11:20.461849  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:11:20.639186  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:11:20.972284  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:11:20.972838  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:11:20.975010  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:11:20.977778  136530 out.go:252]   - Booting up control plane ...
	I1002 21:11:20.977902  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:11:20.977988  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:11:20.978650  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:11:20.991976  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:11:20.992071  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:11:20.998646  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:11:20.998833  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:11:20.998876  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:11:21.092207  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:11:21.092397  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:11:21.592884  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 500.946087ms
	I1002 21:11:21.595869  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:11:21.595984  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:11:21.596132  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:11:21.596258  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:15:21.597851  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	I1002 21:15:21.598116  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	I1002 21:15:21.598335  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	I1002 21:15:21.598356  136530 kubeadm.go:318] 
	I1002 21:15:21.598623  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:15:21.598844  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:15:21.599128  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:15:21.599394  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:15:21.599566  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:15:21.599769  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:15:21.599787  136530 kubeadm.go:318] 
	I1002 21:15:21.602259  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:15:21.602408  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:15:21.603181  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:15:21.603291  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:15:21.603455  136530 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ha-798711 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 500.946087ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.001023651s
	[control-plane-check] kube-apiserver is not healthy after 4m0.001135139s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001461758s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
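All three control-plane components timed out on their health endpoints, so the kubeadm hint above (list containers, then read logs) is the right triage path. A sketch of that triage on the node, with CONTAINERID left as the placeholder kubeadm itself uses:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    sudo journalctl -u kubelet --no-pager | tail -n 50    # kubelet-side view of the failed static pods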
	
	I1002 21:15:21.603561  136530 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:15:24.363820  136530 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.760231298s)
	I1002 21:15:24.363901  136530 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:15:24.377218  136530 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:15:24.377286  136530 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:15:24.385552  136530 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:15:24.385571  136530 kubeadm.go:157] found existing configuration files:
	
	I1002 21:15:24.385623  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:15:24.393473  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:15:24.393531  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:15:24.401360  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:15:24.408975  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:15:24.409037  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:15:24.416503  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.424160  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:15:24.424223  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:15:24.431560  136530 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:15:24.439161  136530 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:15:24.439211  136530 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:15:24.446680  136530 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:15:24.482142  136530 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:15:24.482212  136530 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:15:24.502342  136530 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:15:24.502404  136530 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:15:24.502483  136530 kubeadm.go:318] OS: Linux
	I1002 21:15:24.502557  136530 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:15:24.502650  136530 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:15:24.502725  136530 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:15:24.502814  136530 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:15:24.502885  136530 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:15:24.502966  136530 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:15:24.503032  136530 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:15:24.503109  136530 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:15:24.562924  136530 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:15:24.563090  136530 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:15:24.563218  136530 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:15:24.569709  136530 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:15:24.573671  136530 out.go:252]   - Generating certificates and keys ...
	I1002 21:15:24.573793  136530 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:15:24.573893  136530 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:15:24.573988  136530 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:15:24.574068  136530 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:15:24.574153  136530 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:15:24.574220  136530 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:15:24.574303  136530 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:15:24.574387  136530 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:15:24.574491  136530 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:15:24.574597  136530 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:15:24.574657  136530 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:15:24.574765  136530 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:15:24.789348  136530 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:15:24.868977  136530 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:15:25.024868  136530 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:15:25.213318  136530 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:15:25.975554  136530 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:15:25.975999  136530 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:15:25.978252  136530 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:15:25.980671  136530 out.go:252]   - Booting up control plane ...
	I1002 21:15:25.980791  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:15:25.980867  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:15:25.981238  136530 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:15:25.994378  136530 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:15:25.994489  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:15:26.001065  136530 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:15:26.001301  136530 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:15:26.001351  136530 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:15:26.101609  136530 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:15:26.101814  136530 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:15:27.602761  136530 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501180143s
	I1002 21:15:27.605447  136530 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:15:27.605570  136530 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1002 21:15:27.605712  136530 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:15:27.605835  136530 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:19:27.606107  136530 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	I1002 21:19:27.606234  136530 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	I1002 21:19:27.606393  136530 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	I1002 21:19:27.606434  136530 kubeadm.go:318] 
	I1002 21:19:27.606511  136530 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:19:27.606647  136530 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:19:27.606816  136530 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:19:27.606941  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:19:27.607045  136530 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:19:27.607158  136530 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:19:27.607169  136530 kubeadm.go:318] 
	I1002 21:19:27.610429  136530 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:19:27.610590  136530 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:19:27.611335  136530 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1002 21:19:27.611411  136530 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:19:27.611500  136530 kubeadm.go:402] duration metric: took 8m10.939948553s to StartCluster
	I1002 21:19:27.611564  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:19:27.611626  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:19:27.638989  136530 cri.go:89] found id: ""
	I1002 21:19:27.639037  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.639049  136530 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:19:27.639059  136530 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:19:27.639126  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:19:27.665136  136530 cri.go:89] found id: ""
	I1002 21:19:27.665166  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.665178  136530 logs.go:284] No container was found matching "etcd"
	I1002 21:19:27.665187  136530 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:19:27.665244  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:19:27.691697  136530 cri.go:89] found id: ""
	I1002 21:19:27.691724  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.691731  136530 logs.go:284] No container was found matching "coredns"
	I1002 21:19:27.691752  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:19:27.691809  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:19:27.717719  136530 cri.go:89] found id: ""
	I1002 21:19:27.717762  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.717772  136530 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:19:27.717781  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:19:27.717844  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:19:27.743976  136530 cri.go:89] found id: ""
	I1002 21:19:27.744005  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.744016  136530 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:19:27.744024  136530 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:19:27.744087  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:19:27.770435  136530 cri.go:89] found id: ""
	I1002 21:19:27.770460  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.770474  136530 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:19:27.770481  136530 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:19:27.770546  136530 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:19:27.796208  136530 cri.go:89] found id: ""
	I1002 21:19:27.796238  136530 logs.go:282] 0 containers: []
	W1002 21:19:27.796248  136530 logs.go:284] No container was found matching "kindnet"
	I1002 21:19:27.796258  136530 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:19:27.796272  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:19:27.855749  136530 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:19:27.849064    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.849555    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851130    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.851572    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:19:27.852813    2565 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1002 21:19:27.855789  136530 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:19:27.855805  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:19:27.914361  136530 logs.go:123] Gathering logs for container status ...
	I1002 21:19:27.914404  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:19:27.942759  136530 logs.go:123] Gathering logs for kubelet ...
	I1002 21:19:27.942787  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:19:28.006110  136530 logs.go:123] Gathering logs for dmesg ...
	I1002 21:19:28.006146  136530 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:19:28.020458  136530 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:19:28.020521  136530 out.go:285] * 
	W1002 21:19:28.020588  136530 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.020605  136530 out.go:285] * 
	W1002 21:19:28.022482  136530 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:19:28.026615  136530 out.go:203] 
	W1002 21:19:28.028062  136530 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501180143s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000291044s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000511243s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000722922s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": context deadline exceeded, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:19:28.028092  136530 out.go:285] * 
	I1002 21:19:28.029896  136530 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.224457034Z" level=info msg="createCtr: removing container 8ca0a1dcb78b23de2e1475590735c0dca36ca0ae706ae352100b9571670c0675" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.224495367Z" level=info msg="createCtr: deleting container 8ca0a1dcb78b23de2e1475590735c0dca36ca0ae706ae352100b9571670c0675 from storage" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:10 ha-798711 crio[783]: time="2025-10-02T21:22:10.227277647Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=70dd4caa-e50c-4250-a37c-5ba8a79faa74 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.200979291Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e34e03b6-6b32-44c3-ba11-b30311107617 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.201128037Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=4f12b2a0-6213-4ee9-a5db-3ba766fbb293 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.202011061Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=2bbcf4cf-9b9a-4f89-8bd1-7d5e1bcbe8fb name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.202025134Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=91d70c0d-2273-47b1-9788-4c6e8e170180 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.203035716Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.203207594Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.20337486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.203442706Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.208789553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.209418708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.210806237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.211399735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.225550503Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.22579647Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227148081Z" level=info msg="createCtr: deleting container ID 9d28024dce38ce929d6c8d0563216d82cdf7a08cf61d0c46269a72f7bd8e9917 from idIndex" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227189788Z" level=info msg="createCtr: removing container 9d28024dce38ce929d6c8d0563216d82cdf7a08cf61d0c46269a72f7bd8e9917" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227211859Z" level=info msg="createCtr: deleting container ID db990059658638ecfa08ebcf7ae0d2e895cdec055396759a1bd1f891e2aa837d from idIndex" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227237902Z" level=info msg="createCtr: removing container db990059658638ecfa08ebcf7ae0d2e895cdec055396759a1bd1f891e2aa837d" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227243524Z" level=info msg="createCtr: deleting container 9d28024dce38ce929d6c8d0563216d82cdf7a08cf61d0c46269a72f7bd8e9917 from storage" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.227266667Z" level=info msg="createCtr: deleting container db990059658638ecfa08ebcf7ae0d2e895cdec055396759a1bd1f891e2aa837d from storage" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.230670241Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=492047cb-58f0-4ed1-95aa-492da5b85c1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:22:12 ha-798711 crio[783]: time="2025-10-02T21:22:12.231062025Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=015ce4db-a366-4f73-890f-1fd4dea550d1 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:22:13.329273    4806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:13.329837    4806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:13.331549    4806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:13.332026    4806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:22:13.333574    4806 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:22:13 up  3:04,  0 user,  load average: 0.14, 0.10, 0.15
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:22:10 ha-798711 kubelet[1962]:  > podSandboxID="29268766c938de77a88251d1f04eca5dd36f8e164ff499f61eaf1fca7ad11042"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.227721    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:22:10 ha-798711 kubelet[1962]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:10 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:22:10 ha-798711 kubelet[1962]: E1002 21:22:10.227774    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:22:11 ha-798711 kubelet[1962]: E1002 21:22:11.113873    1962 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac91c27101d16  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,LastTimestamp:2025-10-02 21:15:27.19323471 +0000 UTC m=+1.090778035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:22:11 ha-798711 kubelet[1962]: E1002 21:22:11.850209    1962 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: I1002 21:22:12.035765    1962 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.036162    1962 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.200543    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.200721    1962 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.231021    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:22:12 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:12 ha-798711 kubelet[1962]:  > podSandboxID="809957a7718c537a272955808ab83d0d209917c15901f264880b1842ca38ceb3"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.231158    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:22:12 ha-798711 kubelet[1962]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:12 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.231204    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.231317    1962 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:22:12 ha-798711 kubelet[1962]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:12 ha-798711 kubelet[1962]:  > podSandboxID="76c61fa26c511dcbbaf5f791824244f525f21034929271894f96b97be53d12e4"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.231396    1962 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:22:12 ha-798711 kubelet[1962]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:22:12 ha-798711 kubelet[1962]:  > logger="UnhandledError"
	Oct 02 21:22:12 ha-798711 kubelet[1962]: E1002 21:22:12.232212    1962 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	

                                                
                                                
-- /stdout --
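
Every control-plane container creation in the kubelet log above fails with "cannot open sd-bus: No such file or directory". That error typically means the OCI runtime was asked to use the systemd cgroup manager while no systemd D-Bus socket is reachable inside the node container. A minimal way to confirm this on the node (a diagnostic sketch, assuming the docker driver used in this run and CRI-O's default config location under /etc/crio):

	docker exec ha-798711 grep -R "cgroup_manager" /etc/crio/
	docker exec ha-798711 ls -l /run/dbus/system_bus_socket

If the first command reports cgroup_manager = "systemd" and the second finds no socket, the CreateContainer failures above follow directly.
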
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 6 (302.359195ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1002 21:22:13.716607  149735 status.go:458] kubeconfig endpoint: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.60s)
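
The status probe above also fails before reaching the apiserver: the "ha-798711" entry is missing from the kubeconfig, so kubectl-based checks cannot connect regardless of the control-plane state. When reproducing locally, the stale context can usually be repaired with the command the warning itself suggests (a sketch using this run's binary and profile name):

	out/minikube-linux-amd64 -p ha-798711 update-context
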

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-798711 stop --alsologtostderr -v 5: (1.208974226s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 start --wait true --alsologtostderr -v 5
E1002 21:27:02.775525   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 start --wait true --alsologtostderr -v 5: exit status 80 (6m7.246773439s)

                                                
                                                
-- stdout --
	* [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:22:15.033227  150075 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:15.033502  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033514  150075 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:15.033519  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033759  150075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:22:15.034237  150075 out.go:368] Setting JSON to false
	I1002 21:22:15.035218  150075 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11076,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:22:15.035319  150075 start.go:140] virtualization: kvm guest
	I1002 21:22:15.037453  150075 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:22:15.038781  150075 notify.go:220] Checking for updates...
	I1002 21:22:15.038868  150075 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:22:15.040220  150075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:15.041802  150075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:15.043133  150075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:22:15.044244  150075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:22:15.047976  150075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:22:15.049912  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:15.050054  150075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:22:15.074981  150075 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:22:15.075111  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.135266  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.124689773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.135396  150075 docker.go:318] overlay module found
	I1002 21:22:15.137632  150075 out.go:179] * Using the docker driver based on existing profile
	I1002 21:22:15.139159  150075 start.go:304] selected driver: docker
	I1002 21:22:15.139180  150075 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.139298  150075 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:22:15.139392  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.200879  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.189950344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.201570  150075 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:22:15.201600  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:15.201660  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:15.201704  150075 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.204229  150075 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:22:15.206112  150075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:22:15.207484  150075 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:22:15.208801  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:15.208851  150075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:22:15.208877  150075 cache.go:58] Caching tarball of preloaded images
	I1002 21:22:15.208924  150075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:22:15.208992  150075 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:22:15.209009  150075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:22:15.209155  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
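
The profile.go:143 save above serializes the cluster config dumped earlier to the profile's config.json. A minimal sketch of that persistence step in Go, using an invented ClusterConfig struct with a few illustrative fields rather than minikube's real type:

    // save_config.go: hypothetical sketch of persisting a profile config.
    package main

    import (
        "encoding/json"
        "os"
    )

    // Node and ClusterConfig are illustrative stand-ins, not minikube's types.
    type Node struct {
        IP           string
        Port         int
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name              string
        Driver            string
        KubernetesVersion string
        Nodes             []Node
    }

    func main() {
        cfg := ClusterConfig{
            Name:              "ha-798711",
            Driver:            "docker",
            KubernetesVersion: "v1.34.1",
            Nodes:             []Node{{IP: "192.168.49.2", Port: 8443, ControlPlane: true, Worker: true}},
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        // The real path is .minikube/profiles/ha-798711/config.json.
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            panic(err)
        }
    }
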
	I1002 21:22:15.230453  150075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:22:15.230479  150075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:22:15.230497  150075 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:22:15.230539  150075 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:22:15.230610  150075 start.go:364] duration metric: took 49.005µs to acquireMachinesLock for "ha-798711"
	I1002 21:22:15.230632  150075 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:22:15.230641  150075 fix.go:54] fixHost starting: 
	I1002 21:22:15.230913  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.248494  150075 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:22:15.248525  150075 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:22:15.250320  150075 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:22:15.250414  150075 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:22:15.496577  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.515851  150075 kic.go:430] container "ha-798711" state is running.
	I1002 21:22:15.516281  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:15.535909  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.536173  150075 machine.go:93] provisionDockerMachine start ...
	I1002 21:22:15.536238  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:15.556184  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:15.556419  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:15.556431  150075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:22:15.557155  150075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39014->127.0.0.1:32788: read: connection reset by peer
	I1002 21:22:18.704850  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
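The dial error above is expected: the container was just restarted, so the first SSH handshake is reset before sshd is listening, and libmachine keeps retrying until it gets through (about three seconds later here). A minimal sketch of that kind of wait loop, with a plain TCP probe standing in for the real SSH handshake:

    // wait_ssh.go: hypothetical sketch of waiting for a forwarded SSH port.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort polls addr until a TCP connection succeeds or timeout expires.
    func waitForPort(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond) // sshd not up yet; try again
        }
        return fmt.Errorf("%s not reachable within %v", addr, timeout)
    }

    func main() {
        // 127.0.0.1:32788 is the host port Docker mapped to the container's 22/tcp.
        fmt.Println(waitForPort("127.0.0.1:32788", 30*time.Second))
    }
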
	I1002 21:22:18.704885  150075 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:22:18.704951  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.724541  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.724776  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.724790  150075 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:22:18.878693  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.878789  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.897725  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.898007  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.898028  150075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:22:19.043337  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
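
The shell fragment above only rewrites /etc/hosts when no line already maps the new hostname, preferring to repoint an existing 127.0.1.1 entry. A hedged Go equivalent of the read-only half of that check (the helper name is invented):

    // hosts_check.go: hypothetical sketch of the /etc/hosts lookup.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // hasHostsEntry reports whether any /etc/hosts line lists hostname as an alias.
    func hasHostsEntry(hostname string) (bool, error) {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) < 2 || strings.HasPrefix(fields[0], "#") {
                continue
            }
            for _, name := range fields[1:] {
                if name == hostname {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        fmt.Println(hasHostsEntry("ha-798711"))
    }
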
	I1002 21:22:19.043394  150075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:22:19.043439  150075 ubuntu.go:190] setting up certificates
	I1002 21:22:19.043451  150075 provision.go:84] configureAuth start
	I1002 21:22:19.043518  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:19.062653  150075 provision.go:143] copyHostCerts
	I1002 21:22:19.062709  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062765  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:22:19.062785  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062971  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:22:19.063173  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063210  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:22:19.063218  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063299  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:22:19.063404  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063433  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:22:19.063444  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063504  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:22:19.063759  150075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
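
The provision step above generates a server certificate whose SANs cover the loopback address, the node IP, and the machine names. A minimal sketch of building such a SAN list with the standard library's crypto/x509 (self-signed here for brevity; minikube signs with its CA key instead):

    // gen_cert.go: hypothetical sketch of a server cert with the SANs above.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-798711"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            DNSNames:    []string{"ha-798711", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = der // PEM-encode and write server.pem/server-key.pem in real code
    }
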
	I1002 21:22:19.271876  150075 provision.go:177] copyRemoteCerts
	I1002 21:22:19.271944  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:22:19.271986  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.290698  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.393792  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:22:19.393854  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:22:19.412595  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:22:19.412678  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:22:19.430937  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:22:19.431019  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:22:19.448487  150075 provision.go:87] duration metric: took 405.011038ms to configureAuth
	I1002 21:22:19.448522  150075 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:22:19.448707  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:19.448848  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.467458  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:19.467750  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:19.467775  150075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:22:19.727855  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:22:19.727881  150075 machine.go:96] duration metric: took 4.191691329s to provisionDockerMachine
	I1002 21:22:19.727897  150075 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:22:19.727909  150075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:22:19.727963  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:22:19.727998  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.747356  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.850943  150075 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:22:19.854607  150075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:22:19.854646  150075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:22:19.854661  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:22:19.854725  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:22:19.854841  150075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:22:19.854858  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:22:19.854946  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:22:19.862484  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:19.879842  150075 start.go:296] duration metric: took 151.928837ms for postStartSetup
	I1002 21:22:19.879935  150075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:22:19.879987  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.898140  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.997148  150075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:22:20.001838  150075 fix.go:56] duration metric: took 4.771191361s for fixHost
	I1002 21:22:20.001860  150075 start.go:83] releasing machines lock for "ha-798711", held for 4.771239186s
	I1002 21:22:20.001919  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:20.019213  150075 ssh_runner.go:195] Run: cat /version.json
	I1002 21:22:20.019277  150075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:22:20.019282  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.019335  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.038496  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.038883  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.136993  150075 ssh_runner.go:195] Run: systemctl --version
	I1002 21:22:20.196437  150075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:22:20.232211  150075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:22:20.237052  150075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:22:20.237111  150075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:22:20.245114  150075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:22:20.245140  150075 start.go:495] detecting cgroup driver to use...
	I1002 21:22:20.245171  150075 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:22:20.245228  150075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:22:20.259645  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:22:20.272718  150075 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:22:20.272788  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:22:20.287297  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:22:20.300307  150075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:22:20.378191  150075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:22:20.461383  150075 docker.go:234] disabling docker service ...
	I1002 21:22:20.461445  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:22:20.475694  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:22:20.488378  150075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:22:20.566714  150075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:22:20.647020  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:22:20.659659  150075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:22:20.674076  150075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:22:20.674149  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.683499  150075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:22:20.683576  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.692184  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.701173  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.709881  150075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:22:20.717956  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.726833  150075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.735549  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
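
Taken together, the sed edits above aim to leave the drop-in /etc/crio/crio.conf.d/02-crio.conf looking roughly like the following TOML. This is a hedged reconstruction from the commands alone; the exact table names and surrounding keys depend on the stock file shipped in the kicbase image:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
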
	I1002 21:22:20.744269  150075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:22:20.751430  150075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:22:20.758908  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:20.835963  150075 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:22:20.944567  150075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:22:20.944647  150075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:22:20.948732  150075 start.go:563] Will wait 60s for crictl version
	I1002 21:22:20.948898  150075 ssh_runner.go:195] Run: which crictl
	I1002 21:22:20.952464  150075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:22:20.978453  150075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:22:20.978527  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.005771  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.036027  150075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:22:21.037322  150075 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
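
The --format argument above is an ordinary Go text/template rendered against Docker's network-inspect JSON. A minimal sketch of the same mechanism against a stand-in struct:

    // net_template.go: hypothetical sketch of the inspect format template.
    package main

    import (
        "os"
        "text/template"
    )

    type network struct {
        Name   string
        Driver string
    }

    func main() {
        tmpl := template.Must(template.New("net").Parse(
            `{"Name": "{{.Name}}", "Driver": "{{.Driver}}"}`))
        // Prints: {"Name": "ha-798711", "Driver": "bridge"}
        if err := tmpl.Execute(os.Stdout, network{Name: "ha-798711", Driver: "bridge"}); err != nil {
            panic(err)
        }
    }
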
	I1002 21:22:21.055243  150075 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:22:21.059527  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.069849  150075 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:22:21.069971  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:21.070031  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.101888  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.101912  150075 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:22:21.101969  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.128815  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.128841  150075 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:22:21.128849  150075 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:22:21.128946  150075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:22:21.129008  150075 ssh_runner.go:195] Run: crio config
	I1002 21:22:21.175227  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:21.175249  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:21.175268  150075 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:22:21.175292  150075 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:22:21.175442  150075 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
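The generated file above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---). A minimal sketch of reading one of those documents back, assuming the third-party gopkg.in/yaml.v3 package:

    // read_kubeadm_yaml.go: hypothetical sketch of parsing one document.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    func main() {
        doc := "apiVersion: kubeproxy.config.k8s.io/v1alpha1\n" +
            "kind: KubeProxyConfiguration\n" +
            "clusterCIDR: \"10.244.0.0/16\"\n"
        var m map[string]interface{}
        if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
            panic(err)
        }
        fmt.Println(m["kind"], m["clusterCIDR"]) // KubeProxyConfiguration 10.244.0.0/16
    }
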
	I1002 21:22:21.175524  150075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:22:21.183924  150075 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:22:21.183998  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:22:21.191710  150075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:22:21.204157  150075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:22:21.216847  150075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:22:21.229180  150075 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:22:21.232602  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.242257  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:21.318579  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:21.344180  150075 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:22:21.344201  150075 certs.go:195] generating shared ca certs ...
	I1002 21:22:21.344221  150075 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.344381  150075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:22:21.344455  150075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:22:21.344471  150075 certs.go:257] generating profile certs ...
	I1002 21:22:21.344584  150075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:22:21.344614  150075 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:22:21.344641  150075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:22:21.446983  150075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b ...
	I1002 21:22:21.447017  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b: {Name:mk6b0e2c940bd92154a82058107ebf71f1ebbb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447214  150075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b ...
	I1002 21:22:21.447235  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b: {Name:mke31e93943bba4dbb3760f9ef3320f515132a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447360  150075 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:22:21.447546  150075 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:22:21.447767  150075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:22:21.447790  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:22:21.447813  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:22:21.447840  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:22:21.447866  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:22:21.447888  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:22:21.447910  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:22:21.447928  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:22:21.447950  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:22:21.448030  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:22:21.448076  150075 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:22:21.448093  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:22:21.448129  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:22:21.448166  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:22:21.448203  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:22:21.448267  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:21.448395  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.448452  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.448470  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.449026  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:22:21.466820  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:22:21.484119  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:22:21.501626  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:22:21.518887  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:22:21.537171  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:22:21.554236  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:22:21.570920  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:22:21.587838  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:22:21.605043  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:22:21.622260  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:22:21.640014  150075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:22:21.652571  150075 ssh_runner.go:195] Run: openssl version
	I1002 21:22:21.658564  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:22:21.666910  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670523  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670582  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.703921  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:22:21.712602  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:22:21.721117  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.724989  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.725046  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.759244  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:22:21.767656  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:22:21.775895  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779618  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779666  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.813779  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
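
The link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hash of each certificate with a .0 suffix, which is how OpenSSL-style CApath directories index trust anchors. A small sketch that derives the same name by shelling out to the openssl binary used in the log:

    // capath_name.go: hypothetical sketch of deriving the CApath link name.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hashName returns "<subject-hash>.0" for the given PEM certificate.
    func hashName(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
        fmt.Println(hashName("/usr/share/ca-certificates/minikubeCA.pem"))
    }
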
	I1002 21:22:21.822067  150075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:22:21.825883  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:22:21.866534  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:22:21.912015  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:22:21.945912  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:22:21.979879  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:22:22.013644  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 21:22:22.047780  150075 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:22.047887  150075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:22:22.047970  150075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:22:22.075277  150075 cri.go:89] found id: ""
	I1002 21:22:22.075347  150075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:22:22.083258  150075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:22:22.083281  150075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:22:22.083323  150075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:22:22.090708  150075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:22:22.091116  150075 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.091239  150075 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:22:22.091509  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.092053  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
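
The rest.Config dump above is what kapi.go builds from the repaired kubeconfig. A minimal sketch of constructing an equivalent client the conventional way, assuming the k8s.io/client-go dependency:

    // client_config.go: hypothetical sketch of loading the kubeconfig.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        kubeconfig := "/home/jenkins/minikube-integration/21682-80114/kubeconfig"
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println("host:", config.Host, "client ready:", clientset != nil)
    }
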
	I1002 21:22:22.092484  150075 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:22:22.092513  150075 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:22:22.092520  150075 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:22:22.092527  150075 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:22:22.092533  150075 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:22:22.092541  150075 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:22:22.092912  150075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:22:22.100699  150075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:22:22.100750  150075 kubeadm.go:601] duration metric: took 17.449388ms to restartPrimaryControlPlane
	I1002 21:22:22.100763  150075 kubeadm.go:402] duration metric: took 53.015548ms to StartCluster
	I1002 21:22:22.100793  150075 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.100863  150075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.101328  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.101526  150075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:22:22.101599  150075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:22:22.101708  150075 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:22:22.101724  150075 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:22:22.101730  150075 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:22:22.101761  150075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:22:22.101773  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.101780  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:22.102091  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.102244  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.105321  150075 out.go:179] * Verifying Kubernetes components...
	I1002 21:22:22.106401  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:22.123447  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.123864  150075 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:22:22.123914  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.124404  150075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:22:22.124445  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.126097  150075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.126118  150075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:22:22.126171  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.150416  150075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:22:22.150449  150075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:22:22.150520  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.152329  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.170571  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.208965  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:22.222284  150075 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
	I1002 21:22:22.262973  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.276007  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.318565  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.318610  150075 retry.go:31] will retry after 332.195139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
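
Each failed apply above is followed by a retry.go line with a randomized delay; the apiserver is still coming up behind localhost:8443, so the first applies are expected to fail. A minimal sketch of that retry-with-jitter pattern (helper name and delay bounds invented for illustration):

    // apply_retry.go: hypothetical sketch of retrying a kubectl apply.
    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry reruns the command on failure, sleeping a jittered delay.
    func applyWithRetry(args []string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command(args[0], args[1:]...).Run(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(600)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = applyWithRetry([]string{"kubectl", "apply", "-f", "addon.yaml"}, 5)
    }
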
	W1002 21:22:22.330944  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.330979  150075 retry.go:31] will retry after 241.604509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.573473  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.625933  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.625970  150075 retry.go:31] will retry after 389.818611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.651126  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:22.705410  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.705448  150075 retry.go:31] will retry after 411.67483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.016466  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.071260  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.071295  150075 retry.go:31] will retry after 753.441438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.117424  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.170606  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.170639  150075 retry.go:31] will retry after 431.491329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.602877  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.656559  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.656604  150075 retry.go:31] will retry after 803.011573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.825495  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.879546  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.879578  150075 retry.go:31] will retry after 1.121081737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:24.223463  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:24.459804  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:24.512250  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:24.512284  150075 retry.go:31] will retry after 747.175184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.001471  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:25.053899  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.053932  150075 retry.go:31] will retry after 1.702879471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.259962  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:25.312491  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.312520  150075 retry.go:31] will retry after 2.01426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:26.223587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:26.757048  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:26.809444  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:26.809483  150075 retry.go:31] will retry after 2.829127733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.327650  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:27.381974  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.382001  150075 retry.go:31] will retry after 1.605113332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:28.722986  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:28.987350  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:29.041150  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.041187  150075 retry.go:31] will retry after 4.091564679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.639405  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:29.692785  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.692826  150075 retry.go:31] will retry after 2.435801898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:30.723515  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:32.129391  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:32.183937  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:32.183967  150075 retry.go:31] will retry after 5.528972353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:32.723587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:33.133098  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:33.186015  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:33.186053  150075 retry.go:31] will retry after 4.643721978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:34.723860  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:37.223085  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:37.713796  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:37.767671  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.767703  150075 retry.go:31] will retry after 3.727470036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.830928  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:37.886261  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.886294  150075 retry.go:31] will retry after 13.888557881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:39.223775  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:41.495407  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:41.550433  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:41.550498  150075 retry.go:31] will retry after 13.30056895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:41.723179  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:43.723398  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:45.723862  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:48.223047  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:50.223396  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:51.775552  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:51.828821  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:51.828857  150075 retry.go:31] will retry after 14.281203927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:52.723079  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:54.723640  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:54.851897  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:54.905538  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:54.905568  150075 retry.go:31] will retry after 21.127211543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:57.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:59.723028  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:01.723282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:04.222978  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:06.110868  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:06.164215  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:06.164246  150075 retry.go:31] will retry after 25.963131147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:06.223894  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:08.723805  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:11.223497  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:13.723285  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:16.033300  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:16.087245  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:16.087290  150075 retry.go:31] will retry after 24.207208905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:16.222891  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:18.223511  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:20.723507  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:23.223576  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:25.723259  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:27.723840  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:30.223437  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:32.127869  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:32.182828  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:32.182857  150075 retry.go:31] will retry after 38.777289106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:32.723619  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:35.223273  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:37.723255  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:40.223157  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:40.295431  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:40.348642  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:40.348800  150075 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:23:42.223230  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:44.223799  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:46.723897  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:49.223246  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:51.722939  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:53.723114  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:55.723163  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:58.222999  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:00.722961  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:02.723843  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:05.223568  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:07.723531  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:10.223448  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:24:10.961153  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:24:11.016917  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:24:11.017060  150075 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:24:11.019773  150075 out.go:179] * Enabled addons: 
	I1002 21:24:11.021818  150075 addons.go:514] duration metric: took 1m48.920205848s for enable addons: enabled=[]
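The addon phase gives up here with enabled=[] after 1m48.9s of retries. Note that the --validate=false workaround suggested in the stderr would not have rescued these applies: it only skips the OpenAPI schema download, and the apply itself would still be refused by the unreachable apiserver. A cheaper way to short-circuit such a retry storm is to probe the apiserver's unauthenticated /readyz endpoint before applying anything; a sketch (the insecure TLS setting is an illustration-only shortcut, a real probe should trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverReady reports whether the apiserver answers its /readyz
// health endpoint, which is readable without credentials by default.
func apiserverReady(hostPort string) bool {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + hostPort + "/readyz")
	if err != nil {
		return false // e.g. connection refused, as throughout this log
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	fmt.Println(apiserverReady("192.168.49.2:8443"))
}
```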
	W1002 21:24:12.723331  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:15.223307  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:17.723001  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:19.723516  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:21.723927  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:24.223154  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:26.223282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:28.723217  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:30.723311  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:32.723577  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:35.223036  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:37.723107  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:40.223161  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:42.223328  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:44.723125  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:46.723240  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:49.223138  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:51.223190  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:53.723144  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:55.723182  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:58.222963  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:00.223030  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:02.223351  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:04.723125  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:07.222864  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:09.723830  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:12.223189  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:14.722887  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:17.222842  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:19.223820  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:21.723910  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:24.223044  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:26.722924  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:29.222844  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:31.223182  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:33.223469  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:35.223850  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:37.223941  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:39.723890  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:42.223202  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:44.723088  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:46.723135  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:49.222868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:51.223816  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:53.723191  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:56.223164  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:58.722931  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:00.723033  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:02.723294  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:05.223262  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:07.723200  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:10.223269  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:12.223379  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:14.223724  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:16.722876  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:18.723868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:21.223245  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:23.223816  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:25.723025  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:28.222964  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:30.223266  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:32.223312  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:34.723126  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:36.723233  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:39.223187  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:41.722991  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:43.723330  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:46.223283  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:48.723098  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:50.723295  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:52.723368  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:55.223397  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:57.723073  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:00.223143  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:02.223368  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:04.723206  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:07.223122  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:09.722963  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:11.723120  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:13.723253  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:16.223315  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:18.723151  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:20.723332  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:22.723492  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:25.223778  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:27.223886  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:29.722952  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:31.723111  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:33.723288  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:35.723349  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:38.222868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:40.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:42.223219  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:44.723168  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:47.223089  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:49.722908  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:51.723048  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:53.723217  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:56.223185  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:58.723069  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:00.723272  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:02.723378  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:05.223321  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:07.722992  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:10.222865  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:12.722875  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:15.223071  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:17.722867  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:19.723806  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:22.222870  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1002 21:28:22.222917  150075 node_ready.go:38] duration metric: took 6m0.000594512s for node "ha-798711" to be "Ready" ...
	I1002 21:28:22.225366  150075 out.go:203] 
	W1002 21:28:22.227274  150075 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:28:22.227288  150075 out.go:285] * 
	W1002 21:28:22.228925  150075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:28:22.230006  150075 out.go:203] 

                                                
                                                
** /stderr **
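Note: the stderr above shows minikube re-querying the node's Ready condition roughly every 2–2.5 seconds against https://192.168.49.2:8443 until its 6m0s wait deadline expires. To reproduce that check by hand, a minimal sketch (hypothetical commands, assuming kubectl is pointed at the ha-798711 context that minikube creates for this profile):

    # One-shot equivalent of minikube's wait, using kubectl's own condition wait:
    kubectl --context ha-798711 wait node/ha-798711 --for=condition=Ready --timeout=6m

    # Or polled by hand, mirroring the ~2.5s retry interval seen in the log:
    until kubectl --context ha-798711 get node ha-798711 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -q True; do
      sleep 2.5
    done

Either form would have failed the same way here, since the apiserver behind 192.168.49.2:8443 never came up to answer the poll.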
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-798711 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 150286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:22:15.276299903Z",
	            "FinishedAt": "2025-10-02T21:22:14.109000009Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc5898f1fb70247184429418ec47913fc23394ca8038e3769c9426461a4d69e",
	            "SandboxKey": "/var/run/docker/netns/cfc5898f1fb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:38:19:25:8d:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "94b8c1eb9ead0eb293cb635b12ce5567ff3da80e11af8a8897a1fe25f10ab496",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
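The inspect output above shows the container itself is healthy: State.Status is "running" and 8443/tcp is published on 127.0.0.1:32791, so the connection-refused errors came from the apiserver inside the container, not from Docker. To pull just these fields, the same Go-template syntax the harness uses elsewhere in this log works directly (a sketch, not part of the test run):

    # Container state, published apiserver port, and in-network IP:
    docker inspect -f '{{.State.Status}}' ha-798711
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-798711
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-798711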
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (293.052745ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
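The helper treats exit status 2 as "may be ok" because minikube status uses non-zero exit codes to encode degraded component states rather than outright command failure; here the host container is up while the Kubernetes components behind it are not. The probe above only renders the .Host field, which is why stdout reads "Running" despite the non-zero exit. A wider template (a sketch, assuming the documented Status fields) would surface the failing components too:

    out/minikube-linux-amd64 status -p ha-798711 \
        --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'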
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml            │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- rollout status deployment/busybox                      │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                          │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:22:15
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:22:15.033227  150075 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:15.033502  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033514  150075 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:15.033519  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033759  150075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:22:15.034237  150075 out.go:368] Setting JSON to false
	I1002 21:22:15.035218  150075 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11076,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:22:15.035319  150075 start.go:140] virtualization: kvm guest
	I1002 21:22:15.037453  150075 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:22:15.038781  150075 notify.go:220] Checking for updates...
	I1002 21:22:15.038868  150075 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:22:15.040220  150075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:15.041802  150075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:15.043133  150075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:22:15.044244  150075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:22:15.047976  150075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:22:15.049912  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:15.050054  150075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:22:15.074981  150075 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:22:15.075111  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.135266  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.124689773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.135396  150075 docker.go:318] overlay module found
	I1002 21:22:15.137632  150075 out.go:179] * Using the docker driver based on existing profile
	I1002 21:22:15.139159  150075 start.go:304] selected driver: docker
	I1002 21:22:15.139180  150075 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.139298  150075 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:22:15.139392  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.200879  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.189950344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.201570  150075 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:22:15.201600  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:15.201660  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:15.201704  150075 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.204229  150075 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:22:15.206112  150075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:22:15.207484  150075 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:22:15.208801  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:15.208851  150075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:22:15.208877  150075 cache.go:58] Caching tarball of preloaded images
	I1002 21:22:15.208924  150075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:22:15.208992  150075 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:22:15.209009  150075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:22:15.209155  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.230453  150075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:22:15.230479  150075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:22:15.230497  150075 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:22:15.230539  150075 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:22:15.230610  150075 start.go:364] duration metric: took 49.005µs to acquireMachinesLock for "ha-798711"
	I1002 21:22:15.230632  150075 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:22:15.230641  150075 fix.go:54] fixHost starting: 
	I1002 21:22:15.230913  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.248494  150075 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:22:15.248525  150075 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:22:15.250320  150075 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:22:15.250414  150075 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:22:15.496577  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.515851  150075 kic.go:430] container "ha-798711" state is running.
	I1002 21:22:15.516281  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:15.535909  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.536173  150075 machine.go:93] provisionDockerMachine start ...
	I1002 21:22:15.536238  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:15.556184  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:15.556419  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:15.556431  150075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:22:15.557155  150075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39014->127.0.0.1:32788: read: connection reset by peer
	I1002 21:22:18.704850  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.704885  150075 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:22:18.704951  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.724541  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.724776  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.724790  150075 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:22:18.878693  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.878789  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.897725  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.898007  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.898028  150075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:22:19.043337  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:22:19.043394  150075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:22:19.043439  150075 ubuntu.go:190] setting up certificates
	I1002 21:22:19.043451  150075 provision.go:84] configureAuth start
	I1002 21:22:19.043518  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:19.062653  150075 provision.go:143] copyHostCerts
	I1002 21:22:19.062709  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062765  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:22:19.062785  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062971  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:22:19.063173  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063210  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:22:19.063218  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063299  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:22:19.063404  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063433  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:22:19.063444  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063504  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:22:19.063759  150075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:22:19.271876  150075 provision.go:177] copyRemoteCerts
	I1002 21:22:19.271944  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:22:19.271986  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.290698  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.393792  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:22:19.393854  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:22:19.412595  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:22:19.412678  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:22:19.430937  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:22:19.431019  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:22:19.448487  150075 provision.go:87] duration metric: took 405.011038ms to configureAuth
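The configureAuth sequence above regenerates the machine's server certificate against the shared minikube CA, using the SANs listed in the san=[...] field. minikube does this in Go (provision.go); a rough bash/openssl equivalent, with placeholder file names standing in for the .minikube paths, would be:

    # generate a fresh key and CSR for the machine (paths are placeholders)
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.ha-798711"
    # sign it with the minikube CA, attaching the same SANs the log shows
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-798711,DNS:localhost,DNS:minikube")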
	I1002 21:22:19.448522  150075 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:22:19.448707  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:19.448848  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.467458  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:19.467750  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:19.467775  150075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:22:19.727855  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:22:19.727881  150075 machine.go:96] duration metric: took 4.191691329s to provisionDockerMachine
	I1002 21:22:19.727897  150075 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:22:19.727909  150075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:22:19.727963  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:22:19.727998  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.747356  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.850943  150075 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:22:19.854607  150075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:22:19.854646  150075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:22:19.854661  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:22:19.854725  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:22:19.854841  150075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:22:19.854858  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:22:19.854946  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:22:19.862484  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:19.879842  150075 start.go:296] duration metric: took 151.928837ms for postStartSetup
	I1002 21:22:19.879935  150075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:22:19.879987  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.898140  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.997148  150075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:22:20.001838  150075 fix.go:56] duration metric: took 4.771191361s for fixHost
	I1002 21:22:20.001860  150075 start.go:83] releasing machines lock for "ha-798711", held for 4.771239186s
	I1002 21:22:20.001919  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:20.019213  150075 ssh_runner.go:195] Run: cat /version.json
	I1002 21:22:20.019277  150075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:22:20.019282  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.019335  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.038496  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.038883  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.136993  150075 ssh_runner.go:195] Run: systemctl --version
	I1002 21:22:20.196437  150075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:22:20.232211  150075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:22:20.237052  150075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:22:20.237111  150075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:22:20.245114  150075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:22:20.245140  150075 start.go:495] detecting cgroup driver to use...
	I1002 21:22:20.245171  150075 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:22:20.245228  150075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:22:20.259645  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:22:20.272718  150075 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:22:20.272788  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:22:20.287297  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:22:20.300307  150075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:22:20.378191  150075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:22:20.461383  150075 docker.go:234] disabling docker service ...
	I1002 21:22:20.461445  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:22:20.475694  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:22:20.488378  150075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:22:20.566714  150075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:22:20.647020  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:22:20.659659  150075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:22:20.674076  150075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:22:20.674149  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.683499  150075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:22:20.683576  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.692184  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.701173  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.709881  150075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:22:20.717956  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.726833  150075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.735549  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.744269  150075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:22:20.751430  150075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:22:20.758908  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:20.835963  150075 ssh_runner.go:195] Run: sudo systemctl restart crio
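The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf, before CRI-O is restarted. A quick spot-check (not part of the log) that the drop-in ended up with the expected values:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    sudo systemctl is-active crio   # prints "active" once the restart completes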
	I1002 21:22:20.944567  150075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:22:20.944647  150075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:22:20.948732  150075 start.go:563] Will wait 60s for crictl version
	I1002 21:22:20.948898  150075 ssh_runner.go:195] Run: which crictl
	I1002 21:22:20.952464  150075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:22:20.978453  150075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:22:20.978527  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.005771  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.036027  150075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:22:21.037322  150075 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:22:21.055243  150075 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:22:21.059527  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.069849  150075 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:22:21.069971  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:21.070031  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.101888  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.101912  150075 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:22:21.101969  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.128815  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.128841  150075 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:22:21.128849  150075 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:22:21.128946  150075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
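The kubelet unit and its ExecStart override above are written out as a systemd drop-in (10-kubeadm.conf, scp'd a few lines below). A simple way to confirm which flags the unit actually resolves to, not shown in the log, would be:

    systemctl cat kubelet.service        # prints the unit file plus any drop-ins
    systemctl show kubelet -p ExecStart  # the effective ExecStart line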
	I1002 21:22:21.129008  150075 ssh_runner.go:195] Run: crio config
	I1002 21:22:21.175227  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:21.175249  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:21.175268  150075 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:22:21.175292  150075 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:22:21.175442  150075 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
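Before this generated config is handed to kubeadm, it can be sanity-checked offline. Assuming a kubeadm binary sits next to kubelet under /var/lib/minikube/binaries (the log only confirms kubelet there) and a kubeadm release recent enough to ship the validate subcommand, a hypothetical check would be:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new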
	I1002 21:22:21.175524  150075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:22:21.183924  150075 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:22:21.183998  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:22:21.191710  150075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:22:21.204157  150075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:22:21.216847  150075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:22:21.229180  150075 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:22:21.232602  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.242257  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:21.318579  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:21.344180  150075 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:22:21.344201  150075 certs.go:195] generating shared ca certs ...
	I1002 21:22:21.344221  150075 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.344381  150075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:22:21.344455  150075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:22:21.344471  150075 certs.go:257] generating profile certs ...
	I1002 21:22:21.344584  150075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:22:21.344614  150075 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:22:21.344641  150075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:22:21.446983  150075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b ...
	I1002 21:22:21.447017  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b: {Name:mk6b0e2c940bd92154a82058107ebf71f1ebbb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447214  150075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b ...
	I1002 21:22:21.447235  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b: {Name:mke31e93943bba4dbb3760f9ef3320f515132a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447360  150075 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:22:21.447546  150075 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:22:21.447767  150075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:22:21.447790  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:22:21.447813  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:22:21.447840  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:22:21.447866  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:22:21.447888  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:22:21.447910  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:22:21.447928  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:22:21.447950  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:22:21.448030  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:22:21.448076  150075 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:22:21.448093  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:22:21.448129  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:22:21.448166  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:22:21.448203  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:22:21.448267  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:21.448395  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.448452  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.448470  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.449026  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:22:21.466820  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:22:21.484119  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:22:21.501626  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:22:21.518887  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:22:21.537171  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:22:21.554236  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:22:21.570920  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:22:21.587838  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:22:21.605043  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:22:21.622260  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:22:21.640014  150075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:22:21.652571  150075 ssh_runner.go:195] Run: openssl version
	I1002 21:22:21.658564  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:22:21.666910  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670523  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670582  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.703921  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:22:21.712602  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:22:21.721117  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.724989  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.725046  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.759244  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:22:21.767656  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:22:21.775895  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779618  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779666  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.813779  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
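The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention: `openssl x509 -hash` prints the hash that the TLS trust-store lookup expects as the link name. The symlink steps in the log can be reproduced generically as:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"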
	I1002 21:22:21.822067  150075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:22:21.825883  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:22:21.866534  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:22:21.912015  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:22:21.945912  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:22:21.979879  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:22:22.013644  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
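Each -checkend 86400 call above exits 0 only if the certificate is still valid 86400 seconds (24 h) from now, presumably how minikube decides whether the control-plane certs need regenerating before restart. Standalone:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/etcd/server.crt \
      && echo "valid for at least 24h" || echo "expires within 24h"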
	I1002 21:22:22.047780  150075 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:22.047887  150075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:22:22.047970  150075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:22:22.075277  150075 cri.go:89] found id: ""
	I1002 21:22:22.075347  150075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:22:22.083258  150075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:22:22.083281  150075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:22:22.083323  150075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:22:22.090708  150075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:22:22.091116  150075 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.091239  150075 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:22:22.091509  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.092053  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.092484  150075 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:22:22.092513  150075 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:22:22.092520  150075 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:22:22.092527  150075 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:22:22.092533  150075 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:22:22.092541  150075 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:22:22.092912  150075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:22:22.100699  150075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:22:22.100750  150075 kubeadm.go:601] duration metric: took 17.449388ms to restartPrimaryControlPlane
	I1002 21:22:22.100763  150075 kubeadm.go:402] duration metric: took 53.015548ms to StartCluster
	I1002 21:22:22.100793  150075 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.100863  150075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.101328  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.101526  150075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:22:22.101599  150075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:22:22.101708  150075 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:22:22.101724  150075 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:22:22.101730  150075 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:22:22.101761  150075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:22:22.101773  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.101780  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:22.102091  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.102244  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.105321  150075 out.go:179] * Verifying Kubernetes components...
	I1002 21:22:22.106401  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:22.123447  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.123864  150075 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:22:22.123914  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.124404  150075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:22:22.124445  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.126097  150075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.126118  150075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:22:22.126171  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.150416  150075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:22:22.150449  150075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:22:22.150520  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.152329  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.170571  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.208965  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:22.222284  150075 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
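node_ready.go polls the API server for the node's Ready condition within the 6m0s budget shown. A hypothetical kubectl one-liner with the same effect (minikube does the polling in Go instead):

    kubectl --kubeconfig /home/jenkins/minikube-integration/21682-80114/kubeconfig \
      wait --for=condition=Ready node/ha-798711 --timeout=6m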
	I1002 21:22:22.262973  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.276007  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.318565  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.318610  150075 retry.go:31] will retry after 332.195139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:22.330944  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.330979  150075 retry.go:31] will retry after 241.604509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.573473  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.625933  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.625970  150075 retry.go:31] will retry after 389.818611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.651126  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:22.705410  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.705448  150075 retry.go:31] will retry after 411.67483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.016466  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.071260  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.071295  150075 retry.go:31] will retry after 753.441438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.117424  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.170606  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.170639  150075 retry.go:31] will retry after 431.491329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.602877  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.656559  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.656604  150075 retry.go:31] will retry after 803.011573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.825495  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.879546  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.879578  150075 retry.go:31] will retry after 1.121081737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
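The pattern repeating above is retry.go backing off after each failed apply while the API server behind localhost:8443 is still coming up. A minimal shell analogue of that loop (hypothetical; the delays here are illustrative, minikube's are randomized):

    apply_with_retry() {
      # retry a kubectl apply with increasing sleeps, as retry.go does above
      local delay
      for delay in 0.3 0.4 0.8 1.1 1.7; do
        sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f "$1" && return 0
        sleep "$delay"
      done
      return 1
    }
    apply_with_retry /etc/kubernetes/addons/storageclass.yaml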
	W1002 21:22:24.223463  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:24.459804  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:24.512250  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:24.512284  150075 retry.go:31] will retry after 747.175184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.001471  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:25.053899  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.053932  150075 retry.go:31] will retry after 1.702879471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.259962  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:25.312491  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.312520  150075 retry.go:31] will retry after 2.01426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:26.223587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:26.757048  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:26.809444  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:26.809483  150075 retry.go:31] will retry after 2.829127733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.327650  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:27.381974  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.382001  150075 retry.go:31] will retry after 1.605113332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:28.722986  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:28.987350  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:29.041150  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.041187  150075 retry.go:31] will retry after 4.091564679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.639405  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:29.692785  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.692826  150075 retry.go:31] will retry after 2.435801898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
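The growing delays logged by retry.go above (803ms, 1.1s, 747ms, 1.7s, 2.0s, ... eventually ~39s) are the signature of jittered exponential backoff: the base delay roughly doubles per attempt, with random jitter so the intervals are not exact doublings. A minimal, self-contained sketch of that pattern, assuming nothing about minikube's actual retry.go implementation:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries apply until it succeeds or maxAttempts is
// exhausted, sleeping a jittered, roughly doubling delay between tries.
func retryWithBackoff(apply func() error, maxAttempts int) error {
	base := 800 * time.Millisecond // assumed initial delay
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = apply(); err == nil {
			return nil
		}
		// Jitter in [0.5, 1.5) * base keeps concurrent retries from
		// synchronizing, which is why the logged intervals wobble.
		sleep := time.Duration(float64(base) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", sleep.Round(time.Millisecond), err)
		time.Sleep(sleep)
		base *= 2
	}
	return err
}

func main() {
	apiserverDown := errors.New("dial tcp [::1]:8443: connect: connection refused")
	_ = retryWithBackoff(func() error { return apiserverDown }, 5)
}
```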
	W1002 21:22:30.723515  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:32.129391  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:32.183937  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:32.183967  150075 retry.go:31] will retry after 5.528972353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:32.723587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:33.133098  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:33.186015  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:33.186053  150075 retry.go:31] will retry after 4.643721978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:34.723860  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:37.223085  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:37.713796  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:37.767671  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.767703  150075 retry.go:31] will retry after 3.727470036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.830928  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:37.886261  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.886294  150075 retry.go:31] will retry after 13.888557881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:39.223775  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:41.495407  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:41.550433  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:41.550498  150075 retry.go:31] will retry after 13.30056895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:41.723179  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:43.723398  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:45.723862  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:48.223047  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:50.223396  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:51.775552  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:51.828821  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:51.828857  150075 retry.go:31] will retry after 14.281203927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:52.723079  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:54.723640  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:54.851897  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:54.905538  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:54.905568  150075 retry.go:31] will retry after 21.127211543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:57.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:59.723028  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:01.723282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:04.222978  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:06.110868  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:06.164215  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:06.164246  150075 retry.go:31] will retry after 25.963131147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:06.223894  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:08.723805  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:11.223497  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:13.723285  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:16.033300  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:16.087245  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:16.087290  150075 retry.go:31] will retry after 24.207208905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:16.222891  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:18.223511  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:20.723507  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:23.223576  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:25.723259  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:27.723840  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:30.223437  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:32.127869  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:32.182828  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:32.182857  150075 retry.go:31] will retry after 38.777289106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:32.723619  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:35.223273  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:37.723255  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:40.223157  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:40.295431  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:40.348642  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:40.348800  150075 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:23:42.223230  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 warning repeated at ~2-2.5s intervals through 21:24:10; 12 duplicate lines elided ...]
	I1002 21:24:10.961153  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:24:11.016917  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:24:11.017060  150075 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:24:11.019773  150075 out.go:179] * Enabled addons: 
	I1002 21:24:11.021818  150075 addons.go:514] duration metric: took 1m48.920205848s for enable addons: enabled=[]
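With every apply retry exhausted, the addon manager reports enabled=[] and moves on, while node_ready.go keeps polling the node's "Ready" condition against https://192.168.49.2:8443 at the ~2-2.5s cadence visible below. A minimal sketch of such a readiness poll using client-go; the function name, interval, and kubeconfig path are illustrative assumptions, not minikube's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and checks whether its Ready condition is True.
func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for { // poll every ~2s, matching the cadence of the warnings in the log
		ready, err := nodeReady(context.Background(), client, "ha-798711")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ready {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```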
	W1002 21:24:12.723331  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	[... identical node_ready.go:55 "connection refused" warning repeated at ~2-2.5s intervals through 21:28:15; 105 duplicate lines elided ...]
	W1002 21:28:15.223071  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:17.722867  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:19.723806  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:22.222870  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1002 21:28:22.222917  150075 node_ready.go:38] duration metric: took 6m0.000594512s for node "ha-798711" to be "Ready" ...
	I1002 21:28:22.225366  150075 out.go:203] 
	W1002 21:28:22.227274  150075 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:28:22.227288  150075 out.go:285] * 
	W1002 21:28:22.228925  150075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:28:22.230006  150075 out.go:203] 
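
The retry block at the top of this log tail is minikube's node-Ready wait: one GET against the same node URL roughly every 2-2.5 seconds across the whole 6-minute window, with the closing "client rate limiter" error at 21:28:22 being client-go refusing a wait that would overshoot the context deadline. A rough shell rendering of that loop (a sketch only, not minikube's code, and it only tests reachability, whereas the real wait also inspects the node's Ready condition):

	timeout 360 sh -c 'until curl -sk --max-time 2 https://192.168.49.2:8443/api/v1/nodes/ha-798711 >/dev/null; do sleep 2; done' || echo "apiserver still unreachable after 6m"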
	
	
	==> CRI-O <==
	Oct 02 21:28:14 ha-798711 crio[516]: time="2025-10-02T21:28:14.454841418Z" level=info msg="createCtr: removing container dfe5436a72977e8acca5eb93e0764bb4010729164b9da82d5992c660bf4b737b" id=1a6b62b8-a98a-4938-b6bc-20e1da093e61 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:14 ha-798711 crio[516]: time="2025-10-02T21:28:14.45487241Z" level=info msg="createCtr: deleting container dfe5436a72977e8acca5eb93e0764bb4010729164b9da82d5992c660bf4b737b from storage" id=1a6b62b8-a98a-4938-b6bc-20e1da093e61 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:14 ha-798711 crio[516]: time="2025-10-02T21:28:14.456820952Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=1a6b62b8-a98a-4938-b6bc-20e1da093e61 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.432708752Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=88760920-763c-4ed8-a743-275064ab04a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.433672104Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=50649208-e7af-40fb-aea6-a373466c94fe name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.43463411Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.434883905Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.438138032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.438599682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.456927467Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.458351525Z" level=info msg="createCtr: deleting container ID 2d895c7715fba47e1668d30a174d36e0dd5ac4e75af7e3971c3a0d90d2913a3c from idIndex" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.458385588Z" level=info msg="createCtr: removing container 2d895c7715fba47e1668d30a174d36e0dd5ac4e75af7e3971c3a0d90d2913a3c" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.458420695Z" level=info msg="createCtr: deleting container 2d895c7715fba47e1668d30a174d36e0dd5ac4e75af7e3971c3a0d90d2913a3c from storage" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.460494845Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.432686474Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=da8ba01b-9f07-457b-ad29-06a2def696de name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.433649162Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=69ecacb2-c8b3-45ad-8a5c-517fbf193e68 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.434633944Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.434937813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.439279728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.439711047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.454307287Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455686809Z" level=info msg="createCtr: deleting container ID 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44 from idIndex" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455723879Z" level=info msg="createCtr: removing container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455772875Z" level=info msg="createCtr: deleting container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44 from storage" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.457884491Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
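
Every CreateContainer attempt in the CRI-O section above fails inside the OCI runtime with "cannot open sd-bus: No such file or directory" and is immediately rolled back (delete from idIndex, delete from storage, release name), so kube-apiserver, kube-controller-manager, and etcd never start; the kubelet section below records the same failures from the kubelet's side. The message points at the runtime's systemd cgroup manager (CRI-O's default) being unable to open a systemd bus socket inside the node. A quick way to check the conventional socket paths (a diagnostic sketch; the paths are the usual defaults, not taken from this log):

	docker exec ha-798711 ls -l /run/systemd/private /run/dbus/system_bus_socket

If neither exists, systemd inside the kicbase container never exposed its bus, which matches the create/rollback loop above.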
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:28:23.198262    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:23.198900    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:23.200469    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:23.200911    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:23.202429    1991 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
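
The describe-nodes failure is downstream of the same problem: the apiserver container was never created, so kubectl is refused on localhost:8443 here just as the Ready poll was refused on 192.168.49.2:8443 earlier. A direct probe makes the state explicit (a sketch; it assumes curl is available in the node image):

	docker exec ha-798711 curl -sk --max-time 2 https://localhost:8443/readyz; echo "curl exit=$?"

Connection refused surfaces as curl exit code 7; a healthy apiserver answers "ok" with exit 0.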
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:28:23 up  3:10,  0 user,  load average: 0.00, 0.02, 0.08
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:28:14 ha-798711 kubelet[664]: E1002 21:28:14.457236     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:14 ha-798711 kubelet[664]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:14 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:14 ha-798711 kubelet[664]: E1002 21:28:14.457267     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:28:16 ha-798711 kubelet[664]: E1002 21:28:16.358663     664 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac97c98cb5418  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:22:21.418189848 +0000 UTC m=+0.072153483,LastTimestamp:2025-10-02 21:22:21.418189848 +0000 UTC m=+0.072153483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:28:17 ha-798711 kubelet[664]: E1002 21:28:17.065807     664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:28:17 ha-798711 kubelet[664]: I1002 21:28:17.237493     664 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:28:17 ha-798711 kubelet[664]: E1002 21:28:17.237928     664 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.432190     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.460791     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:18 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:18 ha-798711 kubelet[664]:  > podSandboxID="070852ea1215d81f1475b26ef3649aac90a5ff4592155f15c001f51f44edae5c"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.460897     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:18 ha-798711 kubelet[664]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:18 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.460930     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:28:21 ha-798711 kubelet[664]: E1002 21:28:21.446166     664 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.432216     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458200     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:22 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > podSandboxID="8e469375d261403293181d2e6c93e44842cb95d59dfe04c34347b112296eedcd"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458324     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:22 ha-798711 kubelet[664]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458364     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (299.521908ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (369.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (1.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 node delete m03 --alsologtostderr -v 5: exit status 103 (256.524617ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-798711"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:28:23.649807  154152 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:23.650081  154152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:23.650092  154152 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:23.650096  154152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:23.650311  154152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:23.650605  154152 mustload.go:65] Loading cluster: ha-798711
	I1002 21:28:23.650943  154152 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:23.651303  154152 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:23.668966  154152 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:23.669247  154152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:23.723823  154152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:28:23.712754538 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:23.723943  154152 api_server.go:166] Checking apiserver status ...
	I1002 21:28:23.723988  154152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:28:23.724024  154152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:23.741417  154152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	W1002 21:28:23.844643  154152 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:23.850554  154152 out.go:179] * The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	I1002 21:28:23.851938  154152 out.go:179]   To start a cluster, run: "minikube start -p ha-798711"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-amd64 -p ha-798711 node delete m03 --alsologtostderr -v 5": exit status 103
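
Exit status 103 is minikube's "apiserver not running" guard: before mutating the cluster, node delete looks for a kube-apiserver process, and the probe it ran over SSH is visible in the stderr above, exiting 1. The same probe can be reproduced without the SSH hop (a sketch reusing the exact pattern from the log):

	docker exec ha-798711 sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo "pgrep exit=$?"

pgrep exit 1 means no matching process, which minikube reports as state=Stopped.
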
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 2 (286.159461ms)

                                                
                                                
-- stdout --
	ha-798711
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:28:23.898698  154249 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:23.899003  154249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:23.899016  154249 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:23.899021  154249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:23.899256  154249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:23.899478  154249 out.go:368] Setting JSON to false
	I1002 21:28:23.899512  154249 mustload.go:65] Loading cluster: ha-798711
	I1002 21:28:23.899638  154249 notify.go:220] Checking for updates...
	I1002 21:28:23.899963  154249 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:23.899980  154249 status.go:174] checking status of ha-798711 ...
	I1002 21:28:23.900426  154249 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:23.918786  154249 status.go:371] ha-798711 host status = "Running" (err=<nil>)
	I1002 21:28:23.918823  154249 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:23.919115  154249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:23.936962  154249 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:23.937209  154249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:23.937243  154249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:23.954634  154249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:24.053070  154249 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:24.059263  154249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:28:24.071218  154249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:24.127017  154249 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:28:24.116162813 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:24.127552  154249 kubeconfig.go:125] found "ha-798711" server: "https://192.168.49.2:8443"
	I1002 21:28:24.127585  154249 api_server.go:166] Checking apiserver status ...
	I1002 21:28:24.127622  154249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 21:28:24.138338  154249 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:24.138359  154249 status.go:463] ha-798711 apiserver status = Running (err=<nil>)
	I1002 21:28:24.138370  154249 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 150286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:22:15.276299903Z",
	            "FinishedAt": "2025-10-02T21:22:14.109000009Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc5898f1fb70247184429418ec47913fc23394ca8038e3769c9426461a4d69e",
	            "SandboxKey": "/var/run/docker/netns/cfc5898f1fb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:38:19:25:8d:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "94b8c1eb9ead0eb293cb635b12ce5567ff3da80e11af8a8897a1fe25f10ab496",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
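
The inspect output above also documents the port plumbing the tests depend on: 8443/tcp (the apiserver) is published on 127.0.0.1:32791, alongside 22/tcp on 32788 that the SSH runner used earlier. The Go template minikube applies to the SSH port works for any published port, for example (a usage sketch):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-798711
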
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (288.953741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- rollout status deployment/busybox                      │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                          │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                            │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:22:15
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
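
For decoding the entries that follow: per the klog convention above, the first line below, I1002 21:22:15.033227  150075 out.go:360], unpacks as severity I (info), month 10 and day 02, wall-clock time to microseconds, process ID 150075 in the threadid slot, and the emitting source location out.go:360.
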
	I1002 21:22:15.033227  150075 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:15.033502  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033514  150075 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:15.033519  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033759  150075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:22:15.034237  150075 out.go:368] Setting JSON to false
	I1002 21:22:15.035218  150075 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11076,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:22:15.035319  150075 start.go:140] virtualization: kvm guest
	I1002 21:22:15.037453  150075 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:22:15.038781  150075 notify.go:220] Checking for updates...
	I1002 21:22:15.038868  150075 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:22:15.040220  150075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:15.041802  150075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:15.043133  150075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:22:15.044244  150075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:22:15.047976  150075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:22:15.049912  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:15.050054  150075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:22:15.074981  150075 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:22:15.075111  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.135266  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.124689773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.135396  150075 docker.go:318] overlay module found
	I1002 21:22:15.137632  150075 out.go:179] * Using the docker driver based on existing profile
	I1002 21:22:15.139159  150075 start.go:304] selected driver: docker
	I1002 21:22:15.139180  150075 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.139298  150075 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:22:15.139392  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.200879  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.189950344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.201570  150075 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:22:15.201600  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:15.201660  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:15.201704  150075 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.204229  150075 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:22:15.206112  150075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:22:15.207484  150075 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:22:15.208801  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:15.208851  150075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:22:15.208877  150075 cache.go:58] Caching tarball of preloaded images
	I1002 21:22:15.208924  150075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:22:15.208992  150075 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:22:15.209009  150075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:22:15.209155  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.230453  150075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:22:15.230479  150075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:22:15.230497  150075 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:22:15.230539  150075 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:22:15.230610  150075 start.go:364] duration metric: took 49.005µs to acquireMachinesLock for "ha-798711"
	I1002 21:22:15.230632  150075 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:22:15.230641  150075 fix.go:54] fixHost starting: 
	I1002 21:22:15.230913  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.248494  150075 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:22:15.248525  150075 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:22:15.250320  150075 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:22:15.250414  150075 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:22:15.496577  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.515851  150075 kic.go:430] container "ha-798711" state is running.
	I1002 21:22:15.516281  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:15.535909  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.536173  150075 machine.go:93] provisionDockerMachine start ...
	I1002 21:22:15.536238  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:15.556184  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:15.556419  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:15.556431  150075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:22:15.557155  150075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39014->127.0.0.1:32788: read: connection reset by peer
	I1002 21:22:18.704850  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.704885  150075 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:22:18.704951  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.724541  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.724776  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.724790  150075 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:22:18.878693  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.878789  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.897725  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.898007  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.898028  150075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:22:19.043337  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
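The guarded shell script above keeps the /etc/hosts edit idempotent: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry, otherwise it appends one. A rough pure-Go equivalent of the same logic (a hypothetical helper, not minikube's code; assumes imports os and strings):

	// ensureHostsEntry maps 127.0.1.1 to hostname in the hosts file,
	// reusing an existing 127.0.1.1 line when one is present.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, l := range lines {
			if strings.HasSuffix(l, " "+hostname) || strings.HasSuffix(l, "\t"+hostname) {
				return nil // already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}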
	I1002 21:22:19.043394  150075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:22:19.043439  150075 ubuntu.go:190] setting up certificates
	I1002 21:22:19.043451  150075 provision.go:84] configureAuth start
	I1002 21:22:19.043518  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:19.062653  150075 provision.go:143] copyHostCerts
	I1002 21:22:19.062709  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062765  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:22:19.062785  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062971  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:22:19.063173  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063210  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:22:19.063218  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063299  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:22:19.063404  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063433  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:22:19.063444  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063504  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:22:19.063759  150075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:22:19.271876  150075 provision.go:177] copyRemoteCerts
	I1002 21:22:19.271944  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:22:19.271986  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.290698  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.393792  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:22:19.393854  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:22:19.412595  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:22:19.412678  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:22:19.430937  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:22:19.431019  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:22:19.448487  150075 provision.go:87] duration metric: took 405.011038ms to configureAuth
	I1002 21:22:19.448522  150075 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:22:19.448707  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:19.448848  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.467458  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:19.467750  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:19.467775  150075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:22:19.727855  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:22:19.727881  150075 machine.go:96] duration metric: took 4.191691329s to provisionDockerMachine
	I1002 21:22:19.727897  150075 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:22:19.727909  150075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:22:19.727963  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:22:19.727998  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.747356  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.850943  150075 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:22:19.854607  150075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:22:19.854646  150075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:22:19.854661  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:22:19.854725  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:22:19.854841  150075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:22:19.854858  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:22:19.854946  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:22:19.862484  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:19.879842  150075 start.go:296] duration metric: took 151.928837ms for postStartSetup
	I1002 21:22:19.879935  150075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:22:19.879987  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.898140  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.997148  150075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:22:20.001838  150075 fix.go:56] duration metric: took 4.771191361s for fixHost
	I1002 21:22:20.001860  150075 start.go:83] releasing machines lock for "ha-798711", held for 4.771239186s
	I1002 21:22:20.001919  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:20.019213  150075 ssh_runner.go:195] Run: cat /version.json
	I1002 21:22:20.019277  150075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:22:20.019282  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.019335  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.038496  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.038883  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.136993  150075 ssh_runner.go:195] Run: systemctl --version
	I1002 21:22:20.196437  150075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:22:20.232211  150075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:22:20.237052  150075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:22:20.237111  150075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:22:20.245114  150075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:22:20.245140  150075 start.go:495] detecting cgroup driver to use...
	I1002 21:22:20.245171  150075 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:22:20.245228  150075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:22:20.259645  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:22:20.272718  150075 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:22:20.272788  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:22:20.287297  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:22:20.300307  150075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:22:20.378191  150075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:22:20.461383  150075 docker.go:234] disabling docker service ...
	I1002 21:22:20.461445  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:22:20.475694  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:22:20.488378  150075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:22:20.566714  150075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:22:20.647020  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:22:20.659659  150075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:22:20.674076  150075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:22:20.674149  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.683499  150075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:22:20.683576  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.692184  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.701173  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.709881  150075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:22:20.717956  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.726833  150075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.735549  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.744269  150075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:22:20.751430  150075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:22:20.758908  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:20.835963  150075 ssh_runner.go:195] Run: sudo systemctl restart crio
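Each sed invocation above is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf: replace the pause_image line, force cgroup_manager to systemd, pin conmon_cgroup, and open unprivileged ports before the daemon restart. The same kind of edit expressed directly in Go (an illustrative sketch, assuming imports os and regexp; not how minikube actually implements it):

	// setCrioOption replaces any existing `key = ...` line with `key = "value"`.
	func setCrioOption(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+" = \""+value+"\""))
		return os.WriteFile(path, out, 0644)
	}

	// Usage mirroring the log:
	// setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	// setCrioOption(conf, "cgroup_manager", "systemd")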
	I1002 21:22:20.944567  150075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:22:20.944647  150075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:22:20.948732  150075 start.go:563] Will wait 60s for crictl version
	I1002 21:22:20.948898  150075 ssh_runner.go:195] Run: which crictl
	I1002 21:22:20.952464  150075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:22:20.978453  150075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
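The two "Will wait 60s" steps are plain polling loops: stat the CRI socket, then keep invoking crictl until it answers or the deadline passes. A minimal sketch of that wait (hypothetical helper, standard library only; assumes imports fmt, os, and time):

	// waitForSocket polls for path until it exists or the timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	// waitForSocket("/var/run/crio/crio.sock", 60*time.Second)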
	I1002 21:22:20.978527  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.005771  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.036027  150075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:22:21.037322  150075 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:22:21.055243  150075 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:22:21.059527  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.069849  150075 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:22:21.069971  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:21.070031  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.101888  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.101912  150075 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:22:21.101969  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.128815  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.128841  150075 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:22:21.128849  150075 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:22:21.128946  150075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:22:21.129008  150075 ssh_runner.go:195] Run: crio config
	I1002 21:22:21.175227  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:21.175249  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:21.175268  150075 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:22:21.175292  150075 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:22:21.175442  150075 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:22:21.175524  150075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:22:21.183924  150075 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:22:21.183998  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:22:21.191710  150075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:22:21.204157  150075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:22:21.216847  150075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:22:21.229180  150075 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:22:21.232602  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.242257  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:21.318579  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:21.344180  150075 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:22:21.344201  150075 certs.go:195] generating shared ca certs ...
	I1002 21:22:21.344221  150075 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.344381  150075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:22:21.344455  150075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:22:21.344471  150075 certs.go:257] generating profile certs ...
	I1002 21:22:21.344584  150075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:22:21.344614  150075 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:22:21.344641  150075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:22:21.446983  150075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b ...
	I1002 21:22:21.447017  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b: {Name:mk6b0e2c940bd92154a82058107ebf71f1ebbb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447214  150075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b ...
	I1002 21:22:21.447235  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b: {Name:mke31e93943bba4dbb3760f9ef3320f515132a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447360  150075 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:22:21.447546  150075 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
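The "generating signed profile cert" step above issues an apiserver certificate whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP, signed by the profile CA. For orientation, a condensed crypto/x509 sketch of issuing such a cert (an assumed helper for illustration, not minikube's implementation; assumes imports crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, encoding/pem, math/big, net, and time):

	// newServerCert issues a CA-signed server certificate for the given IP SANs.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2
			DNSNames:     []string{"ha-798711", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}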
	I1002 21:22:21.447767  150075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:22:21.447790  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:22:21.447813  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:22:21.447840  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:22:21.447866  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:22:21.447888  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:22:21.447910  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:22:21.447928  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:22:21.447950  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:22:21.448030  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:22:21.448076  150075 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:22:21.448093  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:22:21.448129  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:22:21.448166  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:22:21.448203  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:22:21.448267  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:21.448395  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.448452  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.448470  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.449026  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:22:21.466820  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:22:21.484119  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:22:21.501626  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:22:21.518887  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:22:21.537171  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:22:21.554236  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:22:21.570920  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:22:21.587838  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:22:21.605043  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:22:21.622260  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:22:21.640014  150075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:22:21.652571  150075 ssh_runner.go:195] Run: openssl version
	I1002 21:22:21.658564  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:22:21.666910  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670523  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670582  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.703921  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:22:21.712602  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:22:21.721117  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.724989  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.725046  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.759244  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:22:21.767656  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:22:21.775895  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779618  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779666  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.813779  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
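The openssl/ln pairs above follow the standard c_rehash convention: a certificate under /usr/share/ca-certificates is trusted once /etc/ssl/certs holds a symlink named <subject-hash>.0 (for example b5213941.0) pointing at the PEM. The same two steps sketched in Go (illustrative only; assumes imports os, os/exec, path/filepath, and strings):

	// trustCert symlinks certPath into /etc/ssl/certs under its OpenSSL subject hash.
	func trustCert(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // replace any stale link, mirroring `ln -fs`
		return os.Symlink(certPath, link)
	}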
	I1002 21:22:21.822067  150075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:22:21.825883  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:22:21.866534  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:22:21.912015  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:22:21.945912  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:22:21.979879  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:22:22.013644  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 21:22:22.047780  150075 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:22.047887  150075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:22:22.047970  150075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:22:22.075277  150075 cri.go:89] found id: ""
	I1002 21:22:22.075347  150075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:22:22.083258  150075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:22:22.083281  150075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:22:22.083323  150075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:22:22.090708  150075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:22:22.091116  150075 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.091239  150075 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:22:22.091509  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.092053  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
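The rest.Config dump above is the client configuration minikube assembles from the profile's client certificate and CA. For reference, the equivalent construction with client-go (a sketch, not minikube's code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}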
	I1002 21:22:22.092484  150075 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:22:22.092513  150075 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:22:22.092520  150075 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:22:22.092527  150075 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:22:22.092533  150075 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:22:22.092541  150075 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:22:22.092912  150075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:22:22.100699  150075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:22:22.100750  150075 kubeadm.go:601] duration metric: took 17.449388ms to restartPrimaryControlPlane
	I1002 21:22:22.100763  150075 kubeadm.go:402] duration metric: took 53.015548ms to StartCluster
	I1002 21:22:22.100793  150075 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.100863  150075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.101328  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.101526  150075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:22:22.101599  150075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
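The toEnable map above drives the whole addon phase: only default-storageclass and storage-provisioner are true for this profile. A trivial sketch of extracting the enabled set from such a map (names abbreviated to the two that matter here):

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Subset of the toEnable map logged above; every other addon is false.
	toEnable := map[string]bool{
		"default-storageclass": true,
		"storage-provisioner":  true,
		"ingress":              false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println(enabled) // [default-storageclass storage-provisioner]
}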
	I1002 21:22:22.101708  150075 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:22:22.101724  150075 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:22:22.101730  150075 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:22:22.101761  150075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:22:22.101773  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.101780  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:22.102091  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.102244  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.105321  150075 out.go:179] * Verifying Kubernetes components...
	I1002 21:22:22.106401  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:22.123447  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.123864  150075 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:22:22.123914  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.124404  150075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:22:22.124445  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.126097  150075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.126118  150075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:22:22.126171  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.150416  150075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:22:22.150449  150075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:22:22.150520  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.152329  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.170571  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
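The two cli_runner inspections above resolve the container's published SSH port with a Go template over docker inspect, and the sshutil lines then dial 127.0.0.1:32788. The same lookup, shelled out from Go (container name taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the log: the host port mapped to 22/tcp inside
	// the ha-798711 container, which minikube uses for SSH.
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"ha-798711").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 32788 in this run
}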
	I1002 21:22:22.208965  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:22.222284  150075 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
	I1002 21:22:22.262973  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.276007  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.318565  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.318610  150075 retry.go:31] will retry after 332.195139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:22.330944  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.330979  150075 retry.go:31] will retry after 241.604509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.573473  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.625933  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.625970  150075 retry.go:31] will retry after 389.818611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.651126  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:22.705410  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.705448  150075 retry.go:31] will retry after 411.67483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.016466  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.071260  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.071295  150075 retry.go:31] will retry after 753.441438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.117424  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.170606  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.170639  150075 retry.go:31] will retry after 431.491329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.602877  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.656559  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.656604  150075 retry.go:31] will retry after 803.011573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.825495  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.879546  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.879578  150075 retry.go:31] will retry after 1.121081737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:24.223463  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
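From here the node_ready.go poll and the addon applies fail in lockstep: the host-side check dials the node at 192.168.49.2:8443 while kubectl inside the node dials localhost:8443, and both are refused until the apiserver comes back. A sketch of the Ready check itself with client-go (not minikube's exact loop):

package health

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the named node's NodeReady condition is True.
// Each failed Get surfaces as the "error getting node ... (will retry)"
// warnings in the log.
func isNodeReady(ctx context.Context, c kubernetes.Interface, name string) (bool, error) {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. dial tcp 192.168.49.2:8443: connect: connection refused
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}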
	I1002 21:22:24.459804  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:24.512250  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:24.512284  150075 retry.go:31] will retry after 747.175184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.001471  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:25.053899  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.053932  150075 retry.go:31] will retry after 1.702879471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.259962  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:25.312491  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.312520  150075 retry.go:31] will retry after 2.01426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:26.223587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:26.757048  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:26.809444  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:26.809483  150075 retry.go:31] will retry after 2.829127733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.327650  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:27.381974  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.382001  150075 retry.go:31] will retry after 1.605113332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:28.722986  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:28.987350  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:29.041150  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.041187  150075 retry.go:31] will retry after 4.091564679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.639405  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:29.692785  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.692826  150075 retry.go:31] will retry after 2.435801898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:30.723515  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:32.129391  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:32.183937  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:32.183967  150075 retry.go:31] will retry after 5.528972353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:32.723587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:33.133098  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:33.186015  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:33.186053  150075 retry.go:31] will retry after 4.643721978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:34.723860  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:37.223085  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:37.713796  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:37.767671  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.767703  150075 retry.go:31] will retry after 3.727470036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.830928  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:37.886261  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.886294  150075 retry.go:31] will retry after 13.888557881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:39.223775  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:41.495407  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:41.550433  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:41.550498  150075 retry.go:31] will retry after 13.30056895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:41.723179  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:43.723398  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:45.723862  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:48.223047  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:50.223396  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:51.775552  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:51.828821  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:51.828857  150075 retry.go:31] will retry after 14.281203927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:52.723079  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:54.723640  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:54.851897  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:54.905538  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:54.905568  150075 retry.go:31] will retry after 21.127211543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:57.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:59.723028  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:01.723282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:04.222978  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:06.110868  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:06.164215  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:06.164246  150075 retry.go:31] will retry after 25.963131147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:06.223894  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:08.723805  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:11.223497  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:13.723285  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:16.033300  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:16.087245  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:16.087290  150075 retry.go:31] will retry after 24.207208905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:16.222891  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:18.223511  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:20.723507  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:23.223576  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:25.723259  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:27.723840  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:30.223437  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:32.127869  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:32.182828  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:32.182857  150075 retry.go:31] will retry after 38.777289106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:32.723619  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:35.223273  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:37.723255  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:40.223157  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:40.295431  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:40.348642  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:40.348800  150075 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:23:42.223230  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:44.223799  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:46.723897  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:49.223246  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:51.722939  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:53.723114  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:55.723163  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:58.222999  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:00.722961  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:02.723843  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:05.223568  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:07.723531  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:10.223448  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:24:10.961153  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:24:11.016917  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:24:11.017060  150075 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:24:11.019773  150075 out.go:179] * Enabled addons: 
	I1002 21:24:11.021818  150075 addons.go:514] duration metric: took 1m48.920205848s for enable addons: enabled=[]
	W1002 21:24:12.723331  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:15.223307  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	[... the same node_ready.go:55 "connection refused" warning repeats every 2-2.5 seconds from 21:24:15 through 21:28:19; over 100 near-identical retry lines elided ...]
	W1002 21:28:22.222870  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1002 21:28:22.222917  150075 node_ready.go:38] duration metric: took 6m0.000594512s for node "ha-798711" to be "Ready" ...
	I1002 21:28:22.225366  150075 out.go:203] 
	W1002 21:28:22.227274  150075 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:28:22.227288  150075 out.go:285] * 
	W1002 21:28:22.228925  150075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:28:22.230006  150075 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.458385588Z" level=info msg="createCtr: removing container 2d895c7715fba47e1668d30a174d36e0dd5ac4e75af7e3971c3a0d90d2913a3c" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.458420695Z" level=info msg="createCtr: deleting container 2d895c7715fba47e1668d30a174d36e0dd5ac4e75af7e3971c3a0d90d2913a3c from storage" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:18 ha-798711 crio[516]: time="2025-10-02T21:28:18.460494845Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=ba9e1481-8bb8-4257-a4dc-19339594beab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.432686474Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=da8ba01b-9f07-457b-ad29-06a2def696de name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.433649162Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=69ecacb2-c8b3-45ad-8a5c-517fbf193e68 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.434633944Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.434937813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.439279728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.439711047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.454307287Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455686809Z" level=info msg="createCtr: deleting container ID 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44 from idIndex" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455723879Z" level=info msg="createCtr: removing container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455772875Z" level=info msg="createCtr: deleting container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44 from storage" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.457884491Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.432483409Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=50984c33-0614-4388-80fd-5b4fa4fe200b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.433327886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6f59d60c-fa94-4160-8044-eb4c3ea245e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.434208755Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.434425285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.43746206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.437854739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.456669279Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.458143479Z" level=info msg="createCtr: deleting container ID 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11 from idIndex" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.458179821Z" level=info msg="createCtr: removing container 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.45821007Z" level=info msg="createCtr: deleting container 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11 from storage" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.460218811Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:28:25.008223    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:25.008828    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:25.010392    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:25.010942    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:25.012487    2180 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:28:25 up  3:10,  0 user,  load average: 0.00, 0.02, 0.08
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:28:18 ha-798711 kubelet[664]:  > podSandboxID="070852ea1215d81f1475b26ef3649aac90a5ff4592155f15c001f51f44edae5c"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.460897     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:18 ha-798711 kubelet[664]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:18 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:18 ha-798711 kubelet[664]: E1002 21:28:18.460930     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:28:21 ha-798711 kubelet[664]: E1002 21:28:21.446166     664 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.432216     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458200     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:22 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > podSandboxID="8e469375d261403293181d2e6c93e44842cb95d59dfe04c34347b112296eedcd"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458324     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:22 ha-798711 kubelet[664]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458364     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.432057     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460525     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:23 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:23 ha-798711 kubelet[664]:  > podSandboxID="c5eca8f912983184575adf6cbf6a699ab5f4fb71ea1b207b353c78066449782f"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460628     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:23 ha-798711 kubelet[664]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:23 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460658     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:28:24 ha-798711 kubelet[664]: E1002 21:28:24.067160     664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:28:24 ha-798711 kubelet[664]: I1002 21:28:24.239574     664 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:28:24 ha-798711 kubelet[664]: E1002 21:28:24.240000     664 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	

                                                
                                                
-- /stdout --
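
Note: the six-minute run of node_ready.go:55 warnings in the log above is minikube polling the node's "Ready" condition until its 6m0s deadline expires. Below is a minimal client-go sketch of that kind of readiness poll; the function name, interval, and log wording are illustrative, not minikube's actual implementation.

	// readiness poll sketch (illustrative; see minikube's node_ready.go for the real code)
	package main

	import (
		"context"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the node object until its Ready condition is True,
	// retrying transient apiserver errors such as "connection refused".
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// Log and retry instead of aborting; this is what produces
					// the repeated "(will retry)" warnings above.
					log.Printf("error getting node %q condition \"Ready\" status (will retry): %v", name, err)
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-798711", 6*time.Minute); err != nil {
			log.Fatalf("waiting for node to be ready: %v", err)
		}
	}

Once the deadline is reached, client-go's rate limiter refuses further requests ("would exceed context deadline"), which matches the final warning before the wait gave up.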
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (294.192493ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.80s)
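
Note: every static-pod container create in the logs above fails with the same OCI runtime error, "container create failed: cannot open sd-bus: No such file or directory". That error indicates the runtime's systemd cgroup support could not open a connection to a systemd bus socket inside the node container; as a result kube-apiserver, etcd, and kube-scheduler never start, which is why the readiness poll above could never succeed. A hedged diagnostic sketch that probes the conventional systemd/D-Bus socket paths (the paths are standard locations, not taken from this log):

	// sd-bus socket probe (diagnostic sketch; paths are conventional defaults)
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, p := range []string{
			"/run/systemd/private",        // systemd's private API socket
			"/run/dbus/system_bus_socket", // D-Bus system bus socket
		} {
			if _, err := os.Stat(p); err != nil {
				fmt.Printf("%s: missing (%v)\n", p, err)
			} else {
				fmt.Printf("%s: present\n", p)
			}
		}
	}

If neither socket exists in the container, container creation under a systemd cgroup manager fails exactly as logged; switching the runtime to a cgroupfs cgroup manager is a common workaround, though whether that applies to this kicbase image is an assumption.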

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-798711" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
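
Note: the assertion above runs `minikube profile list --output json` and compares the "Status" field of the matching entry under "valid". A trimmed sketch of that check, using only the JSON keys visible in the error message (the struct is limited to the fields used; the real helpers in ha_test.go differ):

	// profile status check sketch (struct trimmed to the fields this check uses)
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			log.Fatal(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-798711" && p.Status != "Degraded" {
				fmt.Printf("expected %q to have \"Degraded\" status but have %q status\n", p.Name, p.Status)
			}
		}
	}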
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 150286,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:22:15.276299903Z",
	            "FinishedAt": "2025-10-02T21:22:14.109000009Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc5898f1fb70247184429418ec47913fc23394ca8038e3769c9426461a4d69e",
	            "SandboxKey": "/var/run/docker/netns/cfc5898f1fb7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:38:19:25:8d:2d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "94b8c1eb9ead0eb293cb635b12ce5567ff3da80e11af8a8897a1fe25f10ab496",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
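
Note: the NetworkSettings.Ports block above records where each container port is published on the host; the apiserver's 8443/tcp is reachable at 127.0.0.1:32791 in this run. A hedged sketch of extracting that mapping with a standard `docker inspect` format template:

	// read the host port mapped to 8443/tcp (standard docker inspect templating)
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"ha-798711").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver published at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}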
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
E1002 21:28:25.851441   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (290.741295ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                    ARGS                                     │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- rollout status deployment/busybox                      │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                       │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                          │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                            │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:22:15
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:22:15.033227  150075 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:22:15.033502  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033514  150075 out.go:374] Setting ErrFile to fd 2...
	I1002 21:22:15.033519  150075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:22:15.033759  150075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:22:15.034237  150075 out.go:368] Setting JSON to false
	I1002 21:22:15.035218  150075 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11076,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:22:15.035319  150075 start.go:140] virtualization: kvm guest
	I1002 21:22:15.037453  150075 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:22:15.038781  150075 notify.go:220] Checking for updates...
	I1002 21:22:15.038868  150075 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:22:15.040220  150075 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:22:15.041802  150075 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:15.043133  150075 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:22:15.044244  150075 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:22:15.047976  150075 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:22:15.049912  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:15.050054  150075 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:22:15.074981  150075 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:22:15.075111  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.135266  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.124689773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.135396  150075 docker.go:318] overlay module found
	I1002 21:22:15.137632  150075 out.go:179] * Using the docker driver based on existing profile
	I1002 21:22:15.139159  150075 start.go:304] selected driver: docker
	I1002 21:22:15.139180  150075 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:15.139298  150075 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:22:15.139392  150075 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:22:15.200879  150075 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:22:15.189950344 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:22:15.201570  150075 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:22:15.201600  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:15.201660  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:15.201704  150075 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GP
Us: AutoPauseInterval:1m0s}
	I1002 21:22:15.204229  150075 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:22:15.206112  150075 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:22:15.207484  150075 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:22:15.208801  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:15.208851  150075 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:22:15.208877  150075 cache.go:58] Caching tarball of preloaded images
	I1002 21:22:15.208924  150075 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:22:15.208992  150075 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:22:15.209009  150075 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:22:15.209155  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.230453  150075 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:22:15.230479  150075 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:22:15.230497  150075 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:22:15.230539  150075 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:22:15.230610  150075 start.go:364] duration metric: took 49.005µs to acquireMachinesLock for "ha-798711"
	I1002 21:22:15.230632  150075 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:22:15.230641  150075 fix.go:54] fixHost starting: 
	I1002 21:22:15.230913  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.248494  150075 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:22:15.248525  150075 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:22:15.250320  150075 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:22:15.250414  150075 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:22:15.496577  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:15.515851  150075 kic.go:430] container "ha-798711" state is running.
	I1002 21:22:15.516281  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:15.535909  150075 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:22:15.536173  150075 machine.go:93] provisionDockerMachine start ...
	I1002 21:22:15.536238  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:15.556184  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:15.556419  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:15.556431  150075 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:22:15.557155  150075 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39014->127.0.0.1:32788: read: connection reset by peer
	I1002 21:22:18.704850  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
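	The handshake above reaches sshd inside the "ha-798711" container through the host port Docker published for 22/tcp (32788 in this run). A minimal sketch of the same connection done by hand, assuming the profile name, key path, and port shown in this log:
	
	    # Look up the host port published for the container's sshd (22/tcp):
	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-798711
	    # Connect the way the provisioner does: user "docker" with the profile's generated key:
	    ssh -o StrictHostKeyChecking=no -p 32788 \
	        -i /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa \
	        docker@127.0.0.1 hostname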
	I1002 21:22:18.704885  150075 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:22:18.704951  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.724541  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.724776  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.724790  150075 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:22:18.878693  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:22:18.878789  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:18.897725  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:18.898007  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:18.898028  150075 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:22:19.043337  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:22:19.043394  150075 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:22:19.043439  150075 ubuntu.go:190] setting up certificates
	I1002 21:22:19.043451  150075 provision.go:84] configureAuth start
	I1002 21:22:19.043518  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:19.062653  150075 provision.go:143] copyHostCerts
	I1002 21:22:19.062709  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062765  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:22:19.062785  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:22:19.062971  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:22:19.063173  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063210  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:22:19.063218  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:22:19.063299  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:22:19.063404  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063433  150075 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:22:19.063444  150075 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:22:19.063504  150075 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:22:19.063759  150075 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:22:19.271876  150075 provision.go:177] copyRemoteCerts
	I1002 21:22:19.271944  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:22:19.271986  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.290698  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.393792  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:22:19.393854  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:22:19.412595  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:22:19.412678  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:22:19.430937  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:22:19.431019  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:22:19.448487  150075 provision.go:87] duration metric: took 405.011038ms to configureAuth
	I1002 21:22:19.448522  150075 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:22:19.448707  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:19.448848  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.467458  150075 main.go:141] libmachine: Using SSH client type: native
	I1002 21:22:19.467750  150075 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I1002 21:22:19.467775  150075 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:22:19.727855  150075 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:22:19.727881  150075 machine.go:96] duration metric: took 4.191691329s to provisionDockerMachine
	I1002 21:22:19.727897  150075 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:22:19.727909  150075 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:22:19.727963  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:22:19.727998  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.747356  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.850943  150075 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:22:19.854607  150075 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:22:19.854646  150075 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:22:19.854661  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:22:19.854725  150075 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:22:19.854841  150075 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:22:19.854858  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:22:19.854946  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:22:19.862484  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:19.879842  150075 start.go:296] duration metric: took 151.928837ms for postStartSetup
	I1002 21:22:19.879935  150075 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:22:19.879987  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:19.898140  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:19.997148  150075 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:22:20.001838  150075 fix.go:56] duration metric: took 4.771191361s for fixHost
	I1002 21:22:20.001860  150075 start.go:83] releasing machines lock for "ha-798711", held for 4.771239186s
	I1002 21:22:20.001919  150075 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:22:20.019213  150075 ssh_runner.go:195] Run: cat /version.json
	I1002 21:22:20.019277  150075 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:22:20.019282  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.019335  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:20.038496  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.038883  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:20.136993  150075 ssh_runner.go:195] Run: systemctl --version
	I1002 21:22:20.196437  150075 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:22:20.232211  150075 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:22:20.237052  150075 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:22:20.237111  150075 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:22:20.245114  150075 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:22:20.245140  150075 start.go:495] detecting cgroup driver to use...
	I1002 21:22:20.245171  150075 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:22:20.245228  150075 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:22:20.259645  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:22:20.272718  150075 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:22:20.272788  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:22:20.287297  150075 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:22:20.300307  150075 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:22:20.378191  150075 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:22:20.461383  150075 docker.go:234] disabling docker service ...
	I1002 21:22:20.461445  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:22:20.475694  150075 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:22:20.488378  150075 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:22:20.566714  150075 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:22:20.647020  150075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:22:20.659659  150075 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:22:20.674076  150075 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:22:20.674149  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.683499  150075 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:22:20.683576  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.692184  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.701173  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.709881  150075 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:22:20.717956  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.726833  150075 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.735549  150075 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:22:20.744269  150075 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:22:20.751430  150075 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:22:20.758908  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:20.835963  150075 ssh_runner.go:195] Run: sudo systemctl restart crio
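	Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf: a pinned pause image, systemd as cgroup manager, conmon placed in the "pod" cgroup, and unprivileged binding allowed from port 0. A quick check, with expected values reconstructed from the commands themselves:
	
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "systemd"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",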
	I1002 21:22:20.944567  150075 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:22:20.944647  150075 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:22:20.948732  150075 start.go:563] Will wait 60s for crictl version
	I1002 21:22:20.948898  150075 ssh_runner.go:195] Run: which crictl
	I1002 21:22:20.952464  150075 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:22:20.978453  150075 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:22:20.978527  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.005771  150075 ssh_runner.go:195] Run: crio --version
	I1002 21:22:21.036027  150075 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:22:21.037322  150075 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:22:21.055243  150075 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:22:21.059527  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
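	The grep-filter-then-cp idiom above (rather than sed -i) is deliberate: inside a Docker container /etc/hosts is bind-mounted, so rename-based in-place edits typically fail with "Device or resource busy", while cp rewrites the existing inode. The same idempotent update spelled out, with hypothetical $IP/$NAME stand-ins for the values in the log:
	
	    IP=192.168.49.1; NAME=host.minikube.internal
	    # Drop any stale entry for $NAME, append the fresh one, then copy back in place:
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$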
	I1002 21:22:21.069849  150075 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:22:21.069971  150075 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:22:21.070031  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.101888  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.101912  150075 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:22:21.101969  150075 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:22:21.128815  150075 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:22:21.128841  150075 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:22:21.128849  150075 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:22:21.128946  150075 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
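	In the generated drop-in above, the empty ExecStart= line is systemd's reset idiom: it clears the base unit's command so the following ExecStart fully replaces it with the cached v1.34.1 kubelet and the node-specific flags. After the scp below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the merged unit can be inspected with:
	
	    # Show kubelet.service plus all drop-ins, including the overriding ExecStart:
	    systemctl cat kubelet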
	I1002 21:22:21.129008  150075 ssh_runner.go:195] Run: crio config
	I1002 21:22:21.175227  150075 cni.go:84] Creating CNI manager for ""
	I1002 21:22:21.175249  150075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:22:21.175268  150075 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:22:21.175292  150075 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:22:21.175442  150075 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
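	The YAML above is what lands in /var/tmp/minikube/kubeadm.yaml.new via the scp below. As a sketch (assuming the kubeadm binary is cached alongside kubelet under /var/lib/minikube/binaries/v1.34.1, and noting that "kubeadm config validate" is only available in recent kubeadm releases), the rendered file can be schema-checked before use:
	
	    # Exits 0 if the rendered config passes kubeadm's validation:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	        --config /var/tmp/minikube/kubeadm.yaml.new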
	I1002 21:22:21.175524  150075 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:22:21.183924  150075 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:22:21.183998  150075 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:22:21.191710  150075 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:22:21.204157  150075 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:22:21.216847  150075 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:22:21.229180  150075 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:22:21.232602  150075 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:22:21.242257  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:21.318579  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:21.344180  150075 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:22:21.344201  150075 certs.go:195] generating shared ca certs ...
	I1002 21:22:21.344221  150075 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.344381  150075 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:22:21.344455  150075 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:22:21.344471  150075 certs.go:257] generating profile certs ...
	I1002 21:22:21.344584  150075 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:22:21.344614  150075 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:22:21.344641  150075 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1002 21:22:21.446983  150075 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b ...
	I1002 21:22:21.447017  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b: {Name:mk6b0e2c940bd92154a82058107ebf71f1ebbb7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447214  150075 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b ...
	I1002 21:22:21.447235  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b: {Name:mke31e93943bba4dbb3760f9ef3320f515132a8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:21.447360  150075 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt
	I1002 21:22:21.447546  150075 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key
	I1002 21:22:21.447767  150075 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:22:21.447790  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:22:21.447813  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:22:21.447840  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:22:21.447866  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:22:21.447888  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:22:21.447910  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:22:21.447928  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:22:21.447950  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:22:21.448030  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:22:21.448076  150075 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:22:21.448093  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:22:21.448129  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:22:21.448166  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:22:21.448203  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:22:21.448267  150075 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:22:21.448395  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.448452  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.448470  150075 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.449026  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:22:21.466820  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:22:21.484119  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:22:21.501626  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:22:21.518887  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:22:21.537171  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:22:21.554236  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:22:21.570920  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:22:21.587838  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:22:21.605043  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:22:21.622260  150075 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:22:21.640014  150075 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:22:21.652571  150075 ssh_runner.go:195] Run: openssl version
	I1002 21:22:21.658564  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:22:21.666910  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670523  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.670582  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:22:21.703921  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:22:21.712602  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:22:21.721117  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.724989  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.725046  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:22:21.759244  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:22:21.767656  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:22:21.775895  150075 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779618  150075 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.779666  150075 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:22:21.813779  150075 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
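	The openssl x509 -hash calls explain the otherwise opaque link names used above: b5213941.0, 51391683.0, and 3ec20f2e.0 are subject-name hashes, the naming scheme OpenSSL expects for lookups in /etc/ssl/certs. Reproducing one mapping from this run:
	
	    # Prints the subject hash that becomes the /etc/ssl/certs/<hash>.0 link name:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    # b5213941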
	I1002 21:22:21.822067  150075 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:22:21.825883  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:22:21.866534  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:22:21.912015  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:22:21.945912  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:22:21.979879  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:22:22.013644  150075 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
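The six `-checkend 86400` calls verify that none of the control-plane certificates expires within the next 24 hours. The same check can be done natively with crypto/x509; a sketch, with the certificate path taken from the log and everything else illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// d, the native equivalent of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```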
	I1002 21:22:22.047780  150075 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:22:22.047887  150075 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:22:22.047970  150075 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:22:22.075277  150075 cri.go:89] found id: ""
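`found id: ""` means the crictl query returned no running kube-system containers under CRI-O. A hedged Go sketch of that listing, mirroring the exact flags above rather than minikube's cri package:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers returns the IDs printed by crictl, one per line;
// empty output yields an empty slice, matching the log's "" result.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
```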
	I1002 21:22:22.075347  150075 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:22:22.083258  150075 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:22:22.083281  150075 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:22:22.083323  150075 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:22:22.090708  150075 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:22:22.091116  150075 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.091239  150075 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:22:22.091509  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
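The repair above adds the missing "ha-798711" cluster and context entries to the kubeconfig under a write lock. A sketch of the same repair with client-go's clientcmd package; the kubeconfig path and endpoint come from the log, while the helper name ensureProfile is hypothetical:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureProfile guarantees that a cluster and context named `name` exist
// in the kubeconfig at path, creating the file if necessary.
func ensureProfile(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		if !os.IsNotExist(err) {
			return err
		}
		cfg = api.NewConfig()
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := ensureProfile("/home/jenkins/minikube-integration/21682-80114/kubeconfig",
		"ha-798711", "https://192.168.49.2:8443")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```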
	I1002 21:22:22.092053  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.092484  150075 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:22:22.092513  150075 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:22:22.092520  150075 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:22:22.092527  150075 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:22:22.092533  150075 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:22:22.092541  150075 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:22:22.092912  150075 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:22:22.100699  150075 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:22:22.100750  150075 kubeadm.go:601] duration metric: took 17.449388ms to restartPrimaryControlPlane
	I1002 21:22:22.100763  150075 kubeadm.go:402] duration metric: took 53.015548ms to StartCluster
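The fast restart path above hinges on the `diff -u` of the current and freshly rendered kubeadm configs: exit status 0 means the files match and no reconfiguration is needed. A sketch of that exit-code check, with paths from the log but logic that is illustrative rather than minikube's kubeadm.go:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig diffs the running and newly generated kubeadm configs;
// diff exits 0 when identical and 1 when the files differ.
func needsReconfig() (bool, error) {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	err := cmd.Run()
	if err == nil {
		return false, nil // identical: keep the running control plane
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // configs differ: reconfigure
	}
	return false, err // diff itself failed (missing file, etc.)
}

func main() {
	reconfig, err := needsReconfig()
	fmt.Println("needs reconfiguration:", reconfig, err)
}
```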
	I1002 21:22:22.100793  150075 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.100863  150075 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:22:22.101328  150075 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:22:22.101526  150075 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:22:22.101599  150075 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:22:22.101708  150075 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:22:22.101724  150075 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:22:22.101730  150075 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:22:22.101761  150075 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:22:22.101773  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.101780  150075 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:22:22.102091  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.102244  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.105321  150075 out.go:179] * Verifying Kubernetes components...
	I1002 21:22:22.106401  150075 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:22:22.123447  150075 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:22:22.123864  150075 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:22:22.123914  150075 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:22:22.124404  150075 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:22:22.124445  150075 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:22:22.126097  150075 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.126118  150075 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:22:22.126171  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.150416  150075 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:22:22.150449  150075 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:22:22.150520  150075 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:22:22.152329  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:22:22.170571  150075 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
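The two inspect calls above resolve which host port Docker mapped to the container's 22/tcp (32788 here) before the SSH clients are opened. A minimal sketch of that lookup, using the same Go template as the log; the function name sshPort is illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort asks Docker for the host port bound to the container's 22/tcp.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("ha-798711")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
```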
	I1002 21:22:22.208965  150075 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:22:22.222284  150075 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
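From here the log interleaves two loops: a six-minute poll for the node's Ready condition and retried kubectl applies, both tolerating connection-refused while the apiserver comes back. A sketch of the readiness poll with client-go; the kubeconfig path, node name, and timeout are taken from the log, the rest is illustrative and not minikube's node_ready.go:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node every 2s until its Ready condition is True
// or the timeout elapses; transient API errors are logged and retried.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // e.g. connection refused during restart
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "ha-798711", 6*time.Minute))
}
```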
	I1002 21:22:22.262973  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:22:22.276007  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.318565  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.318610  150075 retry.go:31] will retry after 332.195139ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
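Each failed apply is retried after a growing, jittered delay (332ms and 241ms here, climbing to tens of seconds further down). The exact policy in retry.go is internal to minikube; this is a generic exponential-backoff sketch of the same pattern:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or maxTime elapses, sleeping a jittered,
// exponentially growing delay between attempts.
func retry(maxTime time.Duration, fn func() error) error {
	deadline := time.Now().Add(maxTime)
	base := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		sleep := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base *= 2
	}
}

func main() {
	calls := 0
	_ = retry(10*time.Second, func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connection refused")
		}
		return nil
	})
}
```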
	W1002 21:22:22.330944  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.330979  150075 retry.go:31] will retry after 241.604509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.573473  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:22.625933  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.625970  150075 retry.go:31] will retry after 389.818611ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.651126  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:22.705410  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:22.705448  150075 retry.go:31] will retry after 411.67483ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.016466  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.071260  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.071295  150075 retry.go:31] will retry after 753.441438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.117424  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.170606  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.170639  150075 retry.go:31] will retry after 431.491329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.602877  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:23.656559  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.656604  150075 retry.go:31] will retry after 803.011573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.825495  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:23.879546  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:23.879578  150075 retry.go:31] will retry after 1.121081737s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:24.223463  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:24.459804  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:24.512250  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:24.512284  150075 retry.go:31] will retry after 747.175184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.001471  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:25.053899  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.053932  150075 retry.go:31] will retry after 1.702879471s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.259962  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:25.312491  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:25.312520  150075 retry.go:31] will retry after 2.01426178s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:26.223587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:26.757048  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:26.809444  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:26.809483  150075 retry.go:31] will retry after 2.829127733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.327650  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:27.381974  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:27.382001  150075 retry.go:31] will retry after 1.605113332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:28.722986  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:28.987350  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:29.041150  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.041187  150075 retry.go:31] will retry after 4.091564679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.639405  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:29.692785  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:29.692826  150075 retry.go:31] will retry after 2.435801898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:30.723515  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:32.129391  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:32.183937  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:32.183967  150075 retry.go:31] will retry after 5.528972353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:32.723587  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:33.133098  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:33.186015  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:33.186053  150075 retry.go:31] will retry after 4.643721978s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:34.723860  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:37.223085  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:37.713796  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:37.767671  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.767703  150075 retry.go:31] will retry after 3.727470036s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.830928  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:37.886261  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:37.886294  150075 retry.go:31] will retry after 13.888557881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:39.223775  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:41.495407  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:41.550433  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:41.550498  150075 retry.go:31] will retry after 13.30056895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:41.723179  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:43.723398  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:45.723862  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:48.223047  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:50.223396  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:51.775552  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:22:51.828821  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:51.828857  150075 retry.go:31] will retry after 14.281203927s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:52.723079  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:54.723640  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:22:54.851897  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:22:54.905538  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:22:54.905568  150075 retry.go:31] will retry after 21.127211543s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:22:57.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:22:59.723028  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:01.723282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:04.222978  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:06.110868  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:06.164215  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:06.164246  150075 retry.go:31] will retry after 25.963131147s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:06.223894  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:08.723805  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:11.223497  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:13.723285  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:16.033300  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:16.087245  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:16.087290  150075 retry.go:31] will retry after 24.207208905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:16.222891  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:18.223511  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:20.723507  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:23.223576  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:25.723259  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:27.723840  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:30.223437  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:32.127869  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:23:32.182828  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:23:32.182857  150075 retry.go:31] will retry after 38.777289106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:32.723619  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:35.223273  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:37.723255  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:40.223157  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:23:40.295431  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:23:40.348642  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:23:40.348800  150075 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:23:42.223230  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:44.223799  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:46.723897  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:49.223246  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:51.722939  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:53.723114  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:55.723163  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:23:58.222999  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:00.722961  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:02.723843  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:05.223568  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:07.723531  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:10.223448  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:24:10.961153  150075 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:24:11.016917  150075 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:24:11.017060  150075 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:24:11.019773  150075 out.go:179] * Enabled addons: 
	I1002 21:24:11.021818  150075 addons.go:514] duration metric: took 1m48.920205848s for enable addons: enabled=[]
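Both storage addons fail at the same point: kubectl validation needs the apiserver's OpenAPI endpoint, and localhost:8443 is refusing connections. The --validate=false flag suggested in the stderr only skips that validation step; a sketch of the manual retry, assuming the same paths shown above, would still need a reachable apiserver:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storage-provisioner.yaml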
	W1002 21:24:12.723331  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:15.223307  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:17.723001  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:19.723516  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:21.723927  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:24.223154  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:26.223282  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:28.723217  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:30.723311  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:32.723577  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:35.223036  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:37.723107  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:40.223161  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:42.223328  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:44.723125  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:46.723240  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:49.223138  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:51.223190  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:53.723144  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:55.723182  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:24:58.222963  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:00.223030  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:02.223351  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:04.723125  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:07.222864  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:09.723830  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:12.223189  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:14.722887  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:17.222842  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:19.223820  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:21.723910  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:24.223044  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:26.722924  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:29.222844  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:31.223182  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:33.223469  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:35.223850  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:37.223941  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:39.723890  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:42.223202  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:44.723088  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:46.723135  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:49.222868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:51.223816  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:53.723191  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:56.223164  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:25:58.722931  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:00.723033  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:02.723294  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:05.223262  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:07.723200  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:10.223269  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:12.223379  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:14.223724  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:16.722876  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:18.723868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:21.223245  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:23.223816  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:25.723025  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:28.222964  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:30.223266  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:32.223312  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:34.723126  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:36.723233  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:39.223187  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:41.722991  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:43.723330  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:46.223283  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:48.723098  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:50.723295  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:52.723368  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:55.223397  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:26:57.723073  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:00.223143  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:02.223368  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:04.723206  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:07.223122  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:09.722963  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:11.723120  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:13.723253  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:16.223315  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:18.723151  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:20.723332  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:22.723492  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:25.223778  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:27.223886  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:29.722952  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:31.723111  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:33.723288  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:35.723349  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:38.222868  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:40.223010  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:42.223219  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:44.723168  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:47.223089  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:49.722908  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:51.723048  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:53.723217  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:56.223185  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:27:58.723069  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:00.723272  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:02.723378  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:05.223321  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:07.722992  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:10.222865  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:12.722875  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:15.223071  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:17.722867  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:19.723806  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:22.222870  150075 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
	I1002 21:28:22.222917  150075 node_ready.go:38] duration metric: took 6m0.000594512s for node "ha-798711" to be "Ready" ...
	I1002 21:28:22.225366  150075 out.go:203] 
	W1002 21:28:22.227274  150075 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:28:22.227288  150075 out.go:285] * 
	W1002 21:28:22.228925  150075 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:28:22.230006  150075 out.go:203] 
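The advice box above applies directly to this failure; a minimal invocation against this profile, assuming the same binary and profile name used throughout this run:

    out/minikube-linux-amd64 -p ha-798711 logs --file=logs.txt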
	
	
	==> CRI-O <==
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455723879Z" level=info msg="createCtr: removing container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.455772875Z" level=info msg="createCtr: deleting container 2adae903f4b201a327a48baffe455ef0c7bddff88a8f857ea028ffc09d17ac44 from storage" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:22 ha-798711 crio[516]: time="2025-10-02T21:28:22.457884491Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=a2aeed6d-c19c-48eb-b326-9a36f5e64138 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.432483409Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=50984c33-0614-4388-80fd-5b4fa4fe200b name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.433327886Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=6f59d60c-fa94-4160-8044-eb4c3ea245e6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.434208755Z" level=info msg="Creating container: kube-system/kube-scheduler-ha-798711/kube-scheduler" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.434425285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.43746206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.437854739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.456669279Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.458143479Z" level=info msg="createCtr: deleting container ID 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11 from idIndex" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.458179821Z" level=info msg="createCtr: removing container 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.45821007Z" level=info msg="createCtr: deleting container 07fe9ad5549ac9544eeae1cc5b50958f43361dac9dd4666f8969a1c2df98fd11 from storage" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:23 ha-798711 crio[516]: time="2025-10-02T21:28:23.460218811Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=c2756bec-958a-496f-9d51-e9660843317f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.43229313Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=c38f641b-063d-49a1-96fd-b040ec1f77ed name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.433221707Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e8629d2a-2e5a-4dd3-bd47-f9d50eda8333 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.434272131Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.434521423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.437930769Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.438332446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.451807257Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.453173343Z" level=info msg="createCtr: deleting container ID 95fb154644db21e076eeb7038be49494860b988f51d504cdae8f11988e8764e0 from idIndex" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.45320435Z" level=info msg="createCtr: removing container 95fb154644db21e076eeb7038be49494860b988f51d504cdae8f11988e8764e0" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.453235062Z" level=info msg="createCtr: deleting container 95fb154644db21e076eeb7038be49494860b988f51d504cdae8f11988e8764e0 from storage" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:28:26 ha-798711 crio[516]: time="2025-10-02T21:28:26.4556783Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=78ae3489-063a-49f7-b856-321a1b3a53bc name=/runtime.v1.RuntimeService/CreateContainer
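The repeated "Container creation error: cannot open sd-bus: No such file or directory" lines above are the proximate failure: the OCI runtime is being asked to create systemd-managed cgroups, but no systemd D-Bus socket is reachable from CRI-O's context. A hedged workaround sketch, assuming a stock /etc/crio/crio.conf and that falling back to the cgroupfs manager is acceptable (CRI-O requires conmon_cgroup to be "pod" in that mode):

    # /etc/crio/crio.conf -- hypothetical fragment, not taken from this run
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"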
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:28:26.599627    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:26.600182    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:26.601939    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:26.602401    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:28:26.604038    2360 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
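kubectl describe nodes fails for the same reason as every earlier call: kube-apiserver is itself one of the static pods that never starts, so nothing listens on 8443. Hedged checks from the host, assuming the kicbase node container was still running at this point:

    docker exec ha-798711 ls /etc/kubernetes/manifests
    docker exec ha-798711 journalctl -u kubelet --no-pager -n 50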
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:28:26 up  3:10,  0 user,  load average: 0.00, 0.02, 0.08
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > podSandboxID="8e469375d261403293181d2e6c93e44842cb95d59dfe04c34347b112296eedcd"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458324     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:22 ha-798711 kubelet[664]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:22 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:22 ha-798711 kubelet[664]: E1002 21:28:22.458364     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.432057     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460525     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:23 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:23 ha-798711 kubelet[664]:  > podSandboxID="c5eca8f912983184575adf6cbf6a699ab5f4fb71ea1b207b353c78066449782f"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460628     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:23 ha-798711 kubelet[664]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:23 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:23 ha-798711 kubelet[664]: E1002 21:28:23.460658     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:28:24 ha-798711 kubelet[664]: E1002 21:28:24.067160     664 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:28:24 ha-798711 kubelet[664]: I1002 21:28:24.239574     664 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:28:24 ha-798711 kubelet[664]: E1002 21:28:24.240000     664 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:28:26 ha-798711 kubelet[664]: E1002 21:28:26.359634     664 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac97c98cb5418  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:22:21.418189848 +0000 UTC m=+0.072153483,LastTimestamp:2025-10-02 21:22:21.418189848 +0000 UTC m=+0.072153483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:28:26 ha-798711 kubelet[664]: E1002 21:28:26.431779     664 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:28:26 ha-798711 kubelet[664]: E1002 21:28:26.456026     664 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:28:26 ha-798711 kubelet[664]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:26 ha-798711 kubelet[664]:  > podSandboxID="d2208349b9d7e6504aeb46fe8481567bca13bc20c1c861508f54266936ccbf9f"
	Oct 02 21:28:26 ha-798711 kubelet[664]: E1002 21:28:26.456136     664 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:28:26 ha-798711 kubelet[664]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:28:26 ha-798711 kubelet[664]:  > logger="UnhandledError"
	Oct 02 21:28:26 ha-798711 kubelet[664]: E1002 21:28:26.456168     664 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
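The kubelet loop above shows the resulting deadlock: etcd, kube-scheduler, kube-controller-manager, and kube-apiserver all fail with the same sd-bus CreateContainerError, so the node can never register against 192.168.49.2:8443. If CRI-O were switched to cgroupfs as sketched earlier, the kubelet's cgroup driver would have to match; a hypothetical fragment:

    # /var/lib/kubelet/config.yaml -- hypothetical fragment
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs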
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (295.679453ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.59s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-798711 stop --alsologtostderr -v 5: (1.207952266s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5: exit status 7 (67.815964ms)

                                                
                                                
-- stdout --
	ha-798711
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:28:28.243438  155619 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.243731  155619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.243753  155619 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.243757  155619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.244002  155619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.244186  155619 out.go:368] Setting JSON to false
	I1002 21:28:28.244216  155619 mustload.go:65] Loading cluster: ha-798711
	I1002 21:28:28.244368  155619 notify.go:220] Checking for updates...
	I1002 21:28:28.244801  155619 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.244823  155619 status.go:174] checking status of ha-798711 ...
	I1002 21:28:28.245404  155619 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.263933  155619 status.go:371] ha-798711 host status = "Stopped" (err=<nil>)
	I1002 21:28:28.263970  155619 status.go:384] host is not running, skipping remaining checks
	I1002 21:28:28.263978  155619 status.go:176] ha-798711 status: &{Name:ha-798711 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
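The three assertion failures below follow from the cluster having been reduced to a single (stopped) control-plane node by the earlier failures, while the test expects a three-node HA profile. A quick way to see which nodes the profile still tracks, assuming the profile config survived the stop:

    out/minikube-linux-amd64 -p ha-798711 node list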
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5": ha-798711
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5": ha-798711
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-798711 status --alsologtostderr -v 5": ha-798711
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:22:15.276299903Z",
	            "FinishedAt": "2025-10-02T21:28:27.30406005Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
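
The dump above is the full `docker container inspect` output for the stopped "ha-798711" node; the empty HostPort and NetworkSettings fields are expected once the container is down. Individual fields can be pulled with the same Go-template mechanism the test driver itself uses in the logs below; a minimal sketch, assuming the container still exists:

	# HostConfig.Memory prints 3221225472 bytes (3 GiB), matching the profile's Memory:3072
	docker container inspect -f '{{.HostConfig.Memory}}' ha-798711
	# State.Status prints "exited" for a stopped KIC node
	docker container inspect -f '{{.State.Status}}' ha-798711
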
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 7 (71.820114ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-798711" host is not running, skipping log retrieval (state="Stopped")
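
The "(may be ok)" hedge exists because `minikube status` encodes state in its exit code: per its help text, the host, cluster, and Kubernetes statuses are set bit-wise from the right, so exit status 7 (1+2+4) means all three are down, which is the expected result right after a deliberate stop. A small decoding sketch, assuming that documented bit layout:

	out/minikube-linux-amd64 status -p ha-798711 >/dev/null 2>&1; rc=$?
	# bit 0 = host, bit 1 = cluster/apiserver, bit 2 = kubernetes (set bit = not OK)
	echo "host_down=$((rc & 1)) cluster_down=$(((rc >> 1) & 1)) kubernetes_down=$(((rc >> 2) & 1))"
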
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (368.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1002 21:32:02.775588   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: exit status 80 (6m7.067031212s)

                                                
                                                
-- stdout --
	* [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:28:28.403003  155675 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.403116  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403125  155675 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.403129  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403315  155675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.403776  155675 out.go:368] Setting JSON to false
	I1002 21:28:28.404642  155675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11449,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:28:28.404726  155675 start.go:140] virtualization: kvm guest
	I1002 21:28:28.406949  155675 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:28:28.408440  155675 notify.go:220] Checking for updates...
	I1002 21:28:28.408467  155675 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:28:28.409938  155675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:28:28.411145  155675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:28.412417  155675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:28:28.413758  155675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:28:28.415028  155675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:28:28.416927  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.417596  155675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:28:28.441148  155675 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:28:28.441315  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.496626  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.486980606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.496755  155675 docker.go:318] overlay module found
	I1002 21:28:28.498705  155675 out.go:179] * Using the docker driver based on existing profile
	I1002 21:28:28.499971  155675 start.go:304] selected driver: docker
	I1002 21:28:28.499988  155675 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.500076  155675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:28:28.500152  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.554609  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.545101226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.555297  155675 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:28:28.555338  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:28.555400  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:28.555463  155675 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.557542  155675 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:28:28.558794  155675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:28:28.559993  155675 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:28:28.561213  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:28.561259  155675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:28:28.561268  155675 cache.go:58] Caching tarball of preloaded images
	I1002 21:28:28.561312  155675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:28:28.561377  155675 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:28:28.561394  155675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:28:28.561531  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.581862  155675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:28:28.581882  155675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:28:28.581898  155675 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:28:28.581920  155675 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:28:28.581974  155675 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:28:28.581991  155675 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:28:28.581998  155675 fix.go:54] fixHost starting: 
	I1002 21:28:28.582193  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.600330  155675 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:28:28.600370  155675 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:28:28.602558  155675 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:28:28.602629  155675 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:28:28.838867  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.857507  155675 kic.go:430] container "ha-798711" state is running.
	I1002 21:28:28.857953  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:28.875695  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.875935  155675 machine.go:93] provisionDockerMachine start ...
	I1002 21:28:28.876007  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:28.894590  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:28.894848  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:28.894862  155675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:28:28.895489  155675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32860->127.0.0.1:32793: read: connection reset by peer
	I1002 21:28:32.042146  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.042175  155675 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:28:32.042247  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.060169  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.060387  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.060400  155675 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:28:32.214017  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.214104  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.232113  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.232342  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.232359  155675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:28:32.376535  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:28:32.376566  155675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:28:32.376584  155675 ubuntu.go:190] setting up certificates
	I1002 21:28:32.376592  155675 provision.go:84] configureAuth start
	I1002 21:28:32.376642  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:32.396020  155675 provision.go:143] copyHostCerts
	I1002 21:28:32.396062  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396100  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:28:32.396116  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396183  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:28:32.396277  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396305  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:28:32.396320  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396353  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:28:32.396398  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396415  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:28:32.396419  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396441  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:28:32.396489  155675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
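
The `san=[...]` list above becomes the subjectAltName extension of the regenerated server certificate. With a recent OpenSSL (1.1.1 or later, which supports `-ext`) this can be checked directly; an illustrative one-liner using the path from the log:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem
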
	I1002 21:28:32.512217  155675 provision.go:177] copyRemoteCerts
	I1002 21:28:32.512275  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:28:32.512317  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.530566  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:32.631941  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:28:32.631999  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:28:32.649350  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:28:32.649401  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:28:32.666579  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:28:32.666640  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:28:32.684729  155675 provision.go:87] duration metric: took 308.118918ms to configureAuth
	I1002 21:28:32.684867  155675 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:28:32.685043  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:32.685148  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.703210  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.703437  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.703461  155675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:28:32.962015  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:28:32.962052  155675 machine.go:96] duration metric: took 4.086102415s to provisionDockerMachine
	I1002 21:28:32.962066  155675 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:28:32.962081  155675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:28:32.962161  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:28:32.962205  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.980349  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.082626  155675 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:28:33.086352  155675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:28:33.086384  155675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:28:33.086398  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:28:33.086455  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:28:33.086573  155675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:28:33.086598  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:28:33.086723  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:28:33.094470  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:33.112480  155675 start.go:296] duration metric: took 150.396395ms for postStartSetup
	I1002 21:28:33.112566  155675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:33.112609  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.130086  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.230100  155675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:28:33.235048  155675 fix.go:56] duration metric: took 4.65304118s for fixHost
	I1002 21:28:33.235074  155675 start.go:83] releasing machines lock for "ha-798711", held for 4.653089722s
	I1002 21:28:33.235148  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:33.253218  155675 ssh_runner.go:195] Run: cat /version.json
	I1002 21:28:33.253241  155675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:28:33.253280  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.253330  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.273049  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.273536  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.445879  155675 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:33.452886  155675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:28:33.488518  155675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:28:33.493393  155675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:28:33.493458  155675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:28:33.501643  155675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:28:33.501669  155675 start.go:495] detecting cgroup driver to use...
	I1002 21:28:33.501700  155675 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:28:33.501756  155675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:28:33.515853  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:28:33.528213  155675 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:28:33.528272  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:28:33.542828  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:28:33.556143  155675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:28:33.634827  155675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:28:33.716388  155675 docker.go:234] disabling docker service ...
	I1002 21:28:33.716495  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:28:33.731194  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:28:33.744342  155675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:28:33.823830  155675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:28:33.905576  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:28:33.918701  155675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:28:33.933267  155675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:28:33.933327  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.942732  155675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:28:33.942809  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.951932  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.961276  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.970164  155675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:28:33.978507  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.987369  155675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.995524  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
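
The sed pipeline above rewrites CRI-O's drop-in config in place rather than templating a new file. A verification sketch follows; the expected values are inferred from the commands above, not captured from the node:

	# Expected after the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "systemd", conmon_cgroup = "pod", and
	# "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls.
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
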
	I1002 21:28:34.004102  155675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:28:34.011220  155675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:28:34.018342  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.095886  155675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:28:34.203604  155675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:28:34.203665  155675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:28:34.207612  155675 start.go:563] Will wait 60s for crictl version
	I1002 21:28:34.207675  155675 ssh_runner.go:195] Run: which crictl
	I1002 21:28:34.211395  155675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:28:34.235415  155675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:28:34.235492  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.263418  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.293048  155675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:28:34.294508  155675 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:28:34.312107  155675 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:28:34.316513  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.327623  155675 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:28:34.327797  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:34.327859  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.360824  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.360849  155675 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:28:34.360901  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.388164  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.388188  155675 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:28:34.388197  155675 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:28:34.388287  155675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:28:34.388349  155675 ssh_runner.go:195] Run: crio config
	I1002 21:28:34.434047  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:34.434070  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:34.434089  155675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:28:34.434108  155675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:28:34.434226  155675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
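
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is applied; a sketch, assuming `kubeadm config validate` is available in the v1.34.1 binaries:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
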
	
	I1002 21:28:34.434286  155675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:28:34.442337  155675 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:28:34.442397  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:28:34.450473  155675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:28:34.462634  155675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:28:34.474595  155675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:28:34.486784  155675 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:28:34.490250  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
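
The one-liner above (and the host.minikube.internal variant earlier) is an idempotent /etc/hosts update: filter out any existing line for the name, append a fresh tab-separated entry, and copy the temp file back over /etc/hosts with sudo. Generalized as a hypothetical helper:

	update_hosts_entry() {  # usage: update_hosts_entry <ip> <hostname>  (hypothetical helper)
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	update_hosts_entry 192.168.49.2 control-plane.minikube.internal
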
	I1002 21:28:34.499967  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.576427  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:34.601305  155675 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:28:34.601329  155675 certs.go:195] generating shared ca certs ...
	I1002 21:28:34.601346  155675 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:34.601512  155675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:28:34.601558  155675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:28:34.601570  155675 certs.go:257] generating profile certs ...
	I1002 21:28:34.601674  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:28:34.601761  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:28:34.601817  155675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:28:34.601830  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:28:34.601853  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:28:34.601878  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:28:34.601897  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:28:34.601915  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:28:34.601943  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:28:34.601963  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:28:34.601979  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:28:34.602044  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:28:34.602085  155675 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:28:34.602098  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:28:34.602132  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:28:34.602161  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:28:34.602187  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:28:34.602249  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:34.602291  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.602313  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.602334  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.603145  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:28:34.622533  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:28:34.642167  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:28:34.661662  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:28:34.684982  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:28:34.703295  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:28:34.721710  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:28:34.739228  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:28:34.756359  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:28:34.773708  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:28:34.791360  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:28:34.809607  155675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:28:34.822659  155675 ssh_runner.go:195] Run: openssl version
	I1002 21:28:34.828896  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:28:34.837462  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841707  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841776  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.876686  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:28:34.885143  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:28:34.893940  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897851  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897917  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.932255  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:28:34.940703  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:28:34.949899  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953722  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953783  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.989786  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
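For reference, the three hash-and-symlink passes above (minikubeCA.pem, 84100.pem, 841002.pem) follow OpenSSL's certificate-directory convention: the trust store looks a CA up via a <subject-hash>.0 symlink in /etc/ssl/certs. A minimal Go sketch of one iteration, shelling out to the same openssl and ln commands the log shows (paths illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // installCACert mirrors the log's sequence: ask openssl for the
    // certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0
    // at it so OpenSSL's directory lookup can find the CA.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// Equivalent to the guarded `test -L ... || ln -fs ...` in the log.
    	return exec.Command("ln", "-fs", pemPath, link).Run()
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println("install failed:", err)
    	}
    }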
	I1002 21:28:34.998247  155675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:28:35.002235  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:28:35.036665  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:28:35.070968  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:28:35.106690  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:28:35.154498  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:28:35.193796  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
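Each `-checkend 86400` probe above asks whether the certificate will still be valid 24 hours (86,400 seconds) from now; a non-zero exit would trigger regeneration before the control plane restarts. The same check in pure Go, assuming a single-certificate PEM file (path illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiringSoon answers the same question as `openssl x509 -checkend 86400`:
    // will this certificate already be expired 24 hours from now?
    func expiringSoon(path string) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiringSoon("/var/lib/minikube/certs/etcd/server.crt")
    	fmt.Println("expiring within 24h:", soon, "err:", err)
    }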
	I1002 21:28:35.228071  155675 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:35.228163  155675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:28:35.228246  155675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:28:35.256219  155675 cri.go:89] found id: ""
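The empty `found id: ""` result means CRI is not tracking any kube-system containers yet, which is what sends the code down the cluster-restart path below. A sketch of the listing itself, with the crictl flags copied from the log (sudo assumed available):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same invocation as the log: all containers (-a), IDs only (--quiet),
    	// filtered to pods in the kube-system namespace by CRI label.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out))
    	fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }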
	I1002 21:28:35.256288  155675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:28:35.264604  155675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:28:35.264627  155675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:28:35.264674  155675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:28:35.271961  155675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:35.272339  155675 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.272429  155675 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:28:35.272674  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.273223  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.273680  155675 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:28:35.273697  155675 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:28:35.273706  155675 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:28:35.273711  155675 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:28:35.273716  155675 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:28:35.273768  155675 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:28:35.274106  155675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:28:35.281708  155675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:28:35.281757  155675 kubeadm.go:601] duration metric: took 17.1218ms to restartPrimaryControlPlane
	I1002 21:28:35.281768  155675 kubeadm.go:402] duration metric: took 53.709514ms to StartCluster
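The "does not require reconfiguration" verdict above rides on the exit code of `diff -u kubeadm.yaml kubeadm.yaml.new`: 0 means the freshly rendered config matches what is already on the node, so the restart can skip re-running kubeadm. A sketch of that decision, assuming GNU diff semantics (0 = identical, 1 = different, >1 = error):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	err := cmd.Run()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("configs identical: skip reconfiguration")
    	case errors.As(err, &ee) && ee.ExitCode() == 1:
    		fmt.Println("configs differ: reconfigure the control plane")
    	default:
    		fmt.Println("diff failed:", err)
    	}
    }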
	I1002 21:28:35.281788  155675 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.281855  155675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.282359  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.282590  155675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:28:35.282703  155675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:28:35.282793  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:35.282811  155675 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:28:35.282831  155675 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:28:35.282837  155675 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:28:35.282853  155675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:28:35.282867  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.283211  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.283373  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.287818  155675 out.go:179] * Verifying Kubernetes components...
	I1002 21:28:35.289179  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:35.305536  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.305848  155675 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:28:35.305892  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.306218  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.306573  155675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:28:35.307769  155675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.307789  155675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:28:35.307839  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.330701  155675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.330727  155675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:28:35.330911  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.334724  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.351684  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
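`scp memory --> /etc/kubernetes/addons/...` above means the manifest never touches local disk: minikube holds the bytes in memory and streams them through the SSH session it just opened to 127.0.0.1:32793. A rough equivalent with golang.org/x/crypto/ssh, where the key path and manifest contents are stand-ins:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/.ssh/id_rsa") // stand-in key path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32793", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()

    	manifest := []byte("# storage-provisioner.yaml contents held in memory\n")
    	sess.Stdin = bytes.NewReader(manifest)
    	// tee writes stdin to the destination path with root privileges.
    	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
    		panic(err)
    	}
    	fmt.Println("manifest installed")
    }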
	I1002 21:28:35.399040  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:35.412985  155675 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
	I1002 21:28:35.442600  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.460605  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:35.502524  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.502566  155675 retry.go:31] will retry after 185.764836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.517773  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.517809  155675 retry.go:31] will retry after 133.246336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
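The retry.go lines that follow are apply-until-the-apiserver-answers: every failed `kubectl apply` schedules another attempt after a randomized, roughly doubling delay (185ms, 306ms, 697ms, ... up to tens of seconds), bounded overall by the 6m0s node wait. A minimal loop in that spirit; applyManifest is a hypothetical stand-in for the exact kubectl invocation shown in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // applyManifest is a hypothetical stand-in for the log's
    // `sudo KUBECONFIG=... kubectl apply --force -f <manifest>` call.
    func applyManifest(path string) error {
    	return exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply", "--force", "-f", path).Run()
    }

    func applyWithRetry(ctx context.Context, path string) error {
    	delay := 150 * time.Millisecond
    	for {
    		err := applyManifest(path)
    		if err == nil {
    			return nil
    		}
    		// Jittered, roughly doubling backoff, capped so late retries
    		// land tens of seconds apart like the ones in the log.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("giving up on %s: %w (last apply error: %v)", path, ctx.Err(), err)
    		case <-time.After(sleep):
    		}
    		if delay < 20*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(applyWithRetry(ctx, "/etc/kubernetes/addons/storageclass.yaml"))
    }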
	I1002 21:28:35.652188  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.688959  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:35.715291  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.715332  155675 retry.go:31] will retry after 306.166157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.759518  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.759549  155675 retry.go:31] will retry after 301.391679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.022497  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:36.061160  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.079961  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.080007  155675 retry.go:31] will retry after 697.847532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:36.118232  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.118271  155675 retry.go:31] will retry after 395.582354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.514512  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.568051  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.568086  155675 retry.go:31] will retry after 646.007893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.778586  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:36.832650  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.832688  155675 retry.go:31] will retry after 716.06432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.214893  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:37.268191  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.268279  155675 retry.go:31] will retry after 854.849255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:37.413941  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
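The node_ready warnings interleaved from here on come from a roughly 2-second poll of GET /api/v1/nodes/ha-798711, bounded by the 6m0s wait declared earlier; each one fails the same way because the apiserver is still refusing connections. The equivalent check with client-go (kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-798711", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // matches the cadence visible in the log
    	}
    	fmt.Println("timed out waiting for Ready")
    }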
	I1002 21:28:37.549248  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:37.603971  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.604014  155675 retry.go:31] will retry after 1.344807605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.124286  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:38.177165  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.177199  155675 retry.go:31] will retry after 1.263429075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.949653  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:39.003395  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.003428  155675 retry.go:31] will retry after 2.765859651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:39.414384  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:39.441621  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:39.494342  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.494371  155675 retry.go:31] will retry after 2.952922772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:41.414500  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:41.769964  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:41.823729  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:41.823776  155675 retry.go:31] will retry after 2.930479483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.447772  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:42.501213  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.501266  155675 retry.go:31] will retry after 3.721393623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:43.414622  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:44.755175  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:44.807949  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:44.807981  155675 retry.go:31] will retry after 4.46774792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:45.913827  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:46.223306  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:46.275912  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:46.275942  155675 retry.go:31] will retry after 9.165769414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:48.413715  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:49.276318  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:49.331953  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:49.331996  155675 retry.go:31] will retry after 7.553909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:50.913554  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:53.413799  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:55.442725  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:55.495811  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:55.495844  155675 retry.go:31] will retry after 8.398663559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:55.913916  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:56.886337  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:56.938883  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:56.938912  155675 retry.go:31] will retry after 5.941880418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:58.414176  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:00.913767  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:02.881855  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:02.913856  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:02.936281  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:02.936310  155675 retry.go:31] will retry after 8.801429272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.895505  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:03.949396  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.949425  155675 retry.go:31] will retry after 8.280385033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:04.914589  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:07.413893  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:09.414585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:11.738357  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:11.791944  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:11.791978  155675 retry.go:31] will retry after 20.07436133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:11.913506  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:12.230962  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:12.284322  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:12.284367  155675 retry.go:31] will retry after 31.198537936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:13.913570  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:15.913975  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:18.413914  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:20.913884  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:22.914461  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:25.414237  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:27.914518  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:30.414136  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:31.867242  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:31.921723  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:31.921774  155675 retry.go:31] will retry after 19.984076529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:32.913680  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:34.914116  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:36.914541  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:39.414546  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:41.914263  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:43.484108  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:43.536861  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:43.536898  155675 retry.go:31] will retry after 27.176524941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:44.413860  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:46.414476  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:48.914309  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:51.414076  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:51.906696  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:51.960820  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:51.960952  155675 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:29:53.414245  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:55.913983  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:58.413904  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:00.913802  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:02.914585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:05.414592  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:07.914259  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:10.413676  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:30:10.714113  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:30:10.768467  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:30:10.768623  155675 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:30:10.771151  155675 out.go:179] * Enabled addons: 
	I1002 21:30:10.772416  155675 addons.go:514] duration metric: took 1m35.489723071s for enable addons: enabled=[]
	W1002 21:30:12.413723  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:14.414457  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:16.913965  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:19.413730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:21.414406  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:23.913870  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:26.413629  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:28.414046  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:30.414474  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:32.914093  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:35.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:37.914285  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:39.914538  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:42.413582  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:44.413882  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:46.414229  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:48.913587  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:50.914483  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:53.413612  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:55.413685  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:57.414468  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:59.913623  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:02.414537  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:04.913937  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:06.914435  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:09.414047  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:11.913920  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:13.914248  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:15.914508  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:18.413878  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:20.913663  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:23.413996  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:25.414227  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:27.414386  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:29.414601  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:31.913548  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:33.913846  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:35.913989  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:38.414223  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:40.414407  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:42.914396  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:45.413639  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:47.913627  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:49.913793  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:52.413722  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:54.414032  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:56.414437  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:58.913898  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:01.413677  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:03.413857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:05.414152  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:07.414277  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:09.414527  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:11.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:14.413681  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:16.413854  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:18.414029  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:20.913949  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:22.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:25.413701  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:27.414620  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:29.914027  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:32.414041  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:34.414502  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:36.914551  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:39.413809  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:41.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:43.913943  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:45.914242  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:47.914422  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:50.413682  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:52.913674  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:54.913997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:56.914580  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:59.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:01.414035  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:03.414188  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:05.913578  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:07.913616  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:09.913947  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:12.413832  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:14.413971  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:16.414484  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:18.913973  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:21.413936  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:23.414140  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:25.414411  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:27.913573  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:29.913817  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:32.413645  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:34.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:36.414473  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:38.913857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:41.413732  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:43.413888  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:45.913712  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:48.413850  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:50.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:53.413931  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:55.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:57.414522  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:59.913776  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:02.413563  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:04.413718  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:06.414028  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:08.414119  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:10.914009  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:13.414193  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:15.414496  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:17.913661  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:19.913874  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:22.413686  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:24.413997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:26.414507  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:28.913912  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:31.414590  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:33.913730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:34:35.413657  155675 node_ready.go:38] duration metric: took 6m0.000618353s for node "ha-798711" to be "Ready" ...
	I1002 21:34:35.416036  155675 out.go:203] 
	W1002 21:34:35.417586  155675 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:34:35.417604  155675 out.go:285] * 
	W1002 21:34:35.419340  155675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:34:35.420515  155675 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : exit status 80
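The wall of node_ready.go warnings in the stderr above is produced by a fixed-interval readiness poll: fetch the node object, inspect its "Ready" condition, treat transient errors such as "connection refused" as retryable, and give up at the 6m0s deadline that surfaces as the GUEST_START failure. A minimal sketch of that pattern with client-go follows (the function names and kubeconfig path are illustrative, not minikube's actual node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls every 2s until the named node reports Ready or the
// timeout elapses, roughly mirroring the retry cadence visible in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Transient apiserver errors are logged and retried, not fatal.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// 6 minutes matches the "wait 6m0s for node" deadline in the failure above.
	if err := waitNodeReady(context.Background(), cs, "ha-798711", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}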
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:28:28.629176332Z",
	            "FinishedAt": "2025-10-02T21:28:27.30406005Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6709695e88674e10e353a7a1e6a5f597599db0f8dff17de25e6a675a5a052e8",
	            "SandboxKey": "/var/run/docker/netns/e6709695e886",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:b8:bb:5f:71:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "d6008f1fd1a1f997c0b42aeef656e8d86f4f11d2951f29e56ff47db4f71a71ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
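The inspect output shows the control-plane container is Running, holds 192.168.49.2 on the ha-798711 network, and publishes 8443/tcp to 127.0.0.1:32796, so the "connection refused" errors above point at the apiserver process inside the container rather than at Docker port publishing. A quick TCP probe of both endpoints illustrates the distinction (a sketch only; the 32796 host port is specific to this run):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the published host port and the container IP directly; with the
	// apiserver down, both dials fail with "connect: connection refused".
	for _, addr := range []string{"127.0.0.1:32796", "192.168.49.2:8443"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}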
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (303.197539ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
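The --format={{.Host}} flag above is a Go text/template evaluated against minikube's status, which is why the command can print "Running" for the host yet exit 2 for degraded components. A small illustration of that template selection (the struct and field names here are assumptions for the sketch, not minikube's exact status type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the structure the template is rendered against.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Only .Host is selected, so degraded components never appear in stdout.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}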
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                                               │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                                           │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │ 02 Oct 25 21:28 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:28:28
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:28:28.403003  155675 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.403116  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403125  155675 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.403129  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403315  155675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.403776  155675 out.go:368] Setting JSON to false
	I1002 21:28:28.404642  155675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11449,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:28:28.404726  155675 start.go:140] virtualization: kvm guest
	I1002 21:28:28.406949  155675 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:28:28.408440  155675 notify.go:220] Checking for updates...
	I1002 21:28:28.408467  155675 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:28:28.409938  155675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:28:28.411145  155675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:28.412417  155675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:28:28.413758  155675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:28:28.415028  155675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:28:28.416927  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.417596  155675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:28:28.441148  155675 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:28:28.441315  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.496626  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.486980606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
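
The docker system info --format "{{json .}}" probe above emits a single JSON document describing the daemon, which minikube decodes (info.go) to learn the cgroup driver, CPU count, and memory. A minimal standalone Go sketch of the same decode, keeping only a few of the fields visible in the dump (illustrative, not minikube's actual code):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same probe the log runs twice above: one JSON document on stdout.
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        // Decode only the fields we care about; unknown fields are ignored.
        var info struct {
            ServerVersion string `json:"ServerVersion"`
            CgroupDriver  string `json:"CgroupDriver"`
            NCPU          int    `json:"NCPU"`
            MemTotal      int64  `json:"MemTotal"`
        }
        if err := json.Unmarshal(out, &info); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d MiB\n",
            info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal>>20)
    }
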
	I1002 21:28:28.496755  155675 docker.go:318] overlay module found
	I1002 21:28:28.498705  155675 out.go:179] * Using the docker driver based on existing profile
	I1002 21:28:28.499971  155675 start.go:304] selected driver: docker
	I1002 21:28:28.499988  155675 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.500076  155675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:28:28.500152  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.554609  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.545101226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.555297  155675 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:28:28.555338  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:28.555400  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:28.555463  155675 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.557542  155675 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:28:28.558794  155675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:28:28.559993  155675 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:28:28.561213  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:28.561259  155675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:28:28.561268  155675 cache.go:58] Caching tarball of preloaded images
	I1002 21:28:28.561312  155675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:28:28.561377  155675 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:28:28.561394  155675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:28:28.561531  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.581862  155675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:28:28.581882  155675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:28:28.581898  155675 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:28:28.581920  155675 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:28:28.581974  155675 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:28:28.581991  155675 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:28:28.581998  155675 fix.go:54] fixHost starting: 
	I1002 21:28:28.582193  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.600330  155675 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:28:28.600370  155675 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:28:28.602558  155675 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:28:28.602629  155675 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:28:28.838867  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.857507  155675 kic.go:430] container "ha-798711" state is running.
	I1002 21:28:28.857953  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:28.875695  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.875935  155675 machine.go:93] provisionDockerMachine start ...
	I1002 21:28:28.876007  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:28.894590  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:28.894848  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:28.894862  155675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:28:28.895489  155675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32860->127.0.0.1:32793: read: connection reset by peer
	I1002 21:28:32.042146  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
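
The handshake failure above ("connection reset by peer") is expected right after docker start: sshd inside the restarted container is still coming up, and libmachine keeps retrying until the hostname command succeeds about three seconds later. A rough Go sketch of that dial-with-retry pattern, reusing the user, port, and key path from this log (illustrative only, not libmachine's implementation):

    package main

    import (
        "fmt"
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry retries the TCP+SSH handshake until sshd is ready,
    // absorbing transient "connection reset by peer" failures.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:32793", cfg, 10)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        fmt.Println("connected")
    }
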
	
	I1002 21:28:32.042175  155675 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:28:32.042247  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.060169  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.060387  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.060400  155675 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:28:32.214017  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.214104  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.232113  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.232342  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.232359  155675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:28:32.376535  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:28:32.376566  155675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:28:32.376584  155675 ubuntu.go:190] setting up certificates
	I1002 21:28:32.376592  155675 provision.go:84] configureAuth start
	I1002 21:28:32.376642  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:32.396020  155675 provision.go:143] copyHostCerts
	I1002 21:28:32.396062  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396100  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:28:32.396116  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396183  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:28:32.396277  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396305  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:28:32.396320  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396353  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:28:32.396398  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396415  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:28:32.396419  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396441  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:28:32.396489  155675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:28:32.512217  155675 provision.go:177] copyRemoteCerts
	I1002 21:28:32.512275  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:28:32.512317  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.530566  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:32.631941  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:28:32.631999  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:28:32.649350  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:28:32.649401  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:28:32.666579  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:28:32.666640  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:28:32.684729  155675 provision.go:87] duration metric: took 308.118918ms to configureAuth
	I1002 21:28:32.684867  155675 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:28:32.685043  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:32.685148  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.703210  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.703437  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.703461  155675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:28:32.962015  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:28:32.962052  155675 machine.go:96] duration metric: took 4.086102415s to provisionDockerMachine
	I1002 21:28:32.962066  155675 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:28:32.962081  155675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:28:32.962161  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:28:32.962205  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.980349  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.082626  155675 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:28:33.086352  155675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:28:33.086384  155675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:28:33.086398  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:28:33.086455  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:28:33.086573  155675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:28:33.086598  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:28:33.086723  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:28:33.094470  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:33.112480  155675 start.go:296] duration metric: took 150.396395ms for postStartSetup
	I1002 21:28:33.112566  155675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:33.112609  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.130086  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.230100  155675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
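
The two df probes above are minikube's post-start disk check: percent used and GiB free on /var. Assuming a Linux host, the same numbers could be read in-process via statfs(2) instead of shelling out; a small hypothetical Go sketch (not how minikube does it, since it must run the check over SSH inside the node):

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/var", &st); err != nil {
            log.Fatal(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        free := st.Bavail * uint64(st.Bsize)
        usedPct := 100 * float64(total-free) / float64(total)
        fmt.Printf("/var: %.0f%% used, %d GiB free\n", usedPct, free>>30)
    }
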
	I1002 21:28:33.235048  155675 fix.go:56] duration metric: took 4.65304118s for fixHost
	I1002 21:28:33.235074  155675 start.go:83] releasing machines lock for "ha-798711", held for 4.653089722s
	I1002 21:28:33.235148  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:33.253218  155675 ssh_runner.go:195] Run: cat /version.json
	I1002 21:28:33.253241  155675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:28:33.253280  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.253330  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.273049  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.273536  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.445879  155675 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:33.452886  155675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:28:33.488518  155675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:28:33.493393  155675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:28:33.493458  155675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:28:33.501643  155675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:28:33.501669  155675 start.go:495] detecting cgroup driver to use...
	I1002 21:28:33.501700  155675 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:28:33.501756  155675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:28:33.515853  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:28:33.528213  155675 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:28:33.528272  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:28:33.542828  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:28:33.556143  155675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:28:33.634827  155675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:28:33.716388  155675 docker.go:234] disabling docker service ...
	I1002 21:28:33.716495  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:28:33.731194  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:28:33.744342  155675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:28:33.823830  155675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:28:33.905576  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:28:33.918701  155675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:28:33.933267  155675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:28:33.933327  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.942732  155675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:28:33.942809  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.951932  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.961276  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.970164  155675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:28:33.978507  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.987369  155675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.995524  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:34.004102  155675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:28:34.011220  155675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:28:34.018342  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.095886  155675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:28:34.203604  155675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:28:34.203665  155675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:28:34.207612  155675 start.go:563] Will wait 60s for crictl version
	I1002 21:28:34.207675  155675 ssh_runner.go:195] Run: which crictl
	I1002 21:28:34.211395  155675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:28:34.235415  155675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:28:34.235492  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.263418  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.293048  155675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:28:34.294508  155675 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:28:34.312107  155675 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:28:34.316513  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.327623  155675 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:28:34.327797  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:34.327859  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.360824  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.360849  155675 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:28:34.360901  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.388164  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.388188  155675 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:28:34.388197  155675 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:28:34.388287  155675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:28:34.388349  155675 ssh_runner.go:195] Run: crio config
	I1002 21:28:34.434047  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:34.434070  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:34.434089  155675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:28:34.434108  155675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:28:34.434226  155675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
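
One invariant worth calling out in the generated config above: the KubeletConfiguration's cgroupDriver must agree with the cgroup_manager that was just written into /etc/crio/crio.conf.d/02-crio.conf (both "systemd" here), otherwise pods fail to start. A hypothetical Go check for that field, decoding only what it needs with gopkg.in/yaml.v3:

    package main

    import (
        "fmt"
        "log"

        "gopkg.in/yaml.v3"
    )

    // Only the fields we want to verify; everything else is ignored on decode.
    type kubeletConfig struct {
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
    }

    func main() {
        doc := []byte(`
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    `)
        var kc kubeletConfig
        if err := yaml.Unmarshal(doc, &kc); err != nil {
            log.Fatal(err)
        }
        if kc.CgroupDriver != "systemd" {
            log.Fatalf("kubelet cgroupDriver %q does not match CRI-O's systemd cgroup manager", kc.CgroupDriver)
        }
        fmt.Println("kubelet and CRI-O agree on the systemd cgroup driver")
    }
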
	
	I1002 21:28:34.434286  155675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:28:34.442337  155675 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:28:34.442397  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:28:34.450473  155675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:28:34.462634  155675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:28:34.474595  155675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:28:34.486784  155675 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:28:34.490250  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.499967  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.576427  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:34.601305  155675 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:28:34.601329  155675 certs.go:195] generating shared ca certs ...
	I1002 21:28:34.601346  155675 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:34.601512  155675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:28:34.601558  155675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:28:34.601570  155675 certs.go:257] generating profile certs ...
	I1002 21:28:34.601674  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:28:34.601761  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:28:34.601817  155675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:28:34.601830  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:28:34.601853  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:28:34.601878  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:28:34.601897  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:28:34.601915  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:28:34.601943  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:28:34.601963  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:28:34.601979  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:28:34.602044  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:28:34.602085  155675 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:28:34.602098  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:28:34.602132  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:28:34.602161  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:28:34.602187  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:28:34.602249  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:34.602291  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.602313  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.602334  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.603145  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:28:34.622533  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:28:34.642167  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:28:34.661662  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:28:34.684982  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:28:34.703295  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:28:34.721710  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:28:34.739228  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:28:34.756359  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:28:34.773708  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:28:34.791360  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:28:34.809607  155675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:28:34.822659  155675 ssh_runner.go:195] Run: openssl version
	I1002 21:28:34.828896  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:28:34.837462  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841707  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841776  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.876686  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:28:34.885143  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:28:34.893940  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897851  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897917  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.932255  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:28:34.940703  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:28:34.949899  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953722  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953783  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.989786  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:28:34.998247  155675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:28:35.002235  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:28:35.036665  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:28:35.070968  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:28:35.106690  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:28:35.154498  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:28:35.193796  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
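
Each openssl x509 ... -checkend 86400 run above asks one question: does the certificate expire within the next 24 hours (non-zero exit if so)? An equivalent check written in Go against one of the same files, shown to illustrate what -checkend tests rather than what minikube executes:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // Equivalent to `openssl x509 -checkend 86400`: fail if the cert
        // expires within the next 24 hours.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            log.Fatalf("certificate expires at %s (within 24h)", cert.NotAfter)
        }
        fmt.Println("certificate is valid for at least another 24h")
    }
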
	I1002 21:28:35.228071  155675 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:35.228163  155675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:28:35.228246  155675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:28:35.256219  155675 cri.go:89] found id: ""
	I1002 21:28:35.256288  155675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:28:35.264604  155675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:28:35.264627  155675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:28:35.264674  155675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:28:35.271961  155675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:35.272339  155675 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.272429  155675 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:28:35.272674  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.273223  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
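
The rest.Config dump above shows that the whole client identity is three PEM files (client cert, client key, and the cluster CA) plus the apiserver host. A minimal client-go sketch that builds the same kind of client from those values and pings the apiserver (a hypothetical standalone program, not minikube's kapi helper):

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // The same three PEM files the dumped rest.Config points at.
        home := "/home/jenkins/minikube-integration/21682-80114"
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: home + "/.minikube/profiles/ha-798711/client.crt",
                KeyFile:  home + "/.minikube/profiles/ha-798711/client.key",
                CAFile:   home + "/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        v, err := clientset.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err) // e.g. apiserver not yet up after the restart
        }
        fmt.Println("apiserver reachable, version", v.GitVersion)
    }
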
	I1002 21:28:35.273680  155675 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:28:35.273697  155675 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:28:35.273706  155675 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:28:35.273711  155675 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:28:35.273716  155675 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:28:35.273768  155675 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:28:35.274106  155675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:28:35.281708  155675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:28:35.281757  155675 kubeadm.go:601] duration metric: took 17.1218ms to restartPrimaryControlPlane
	I1002 21:28:35.281768  155675 kubeadm.go:402] duration metric: took 53.709514ms to StartCluster
	I1002 21:28:35.281788  155675 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.281855  155675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.282359  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.282590  155675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:28:35.282703  155675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:28:35.282793  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:35.282811  155675 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:28:35.282831  155675 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:28:35.282837  155675 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:28:35.282853  155675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:28:35.282867  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.283211  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.283373  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.287818  155675 out.go:179] * Verifying Kubernetes components...
	I1002 21:28:35.289179  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:35.305536  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
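UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}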
	I1002 21:28:35.305848  155675 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:28:35.305892  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.306218  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.306573  155675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:28:35.307769  155675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.307789  155675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:28:35.307839  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.330701  155675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.330727  155675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:28:35.330911  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
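
The two docker container inspect calls above resolve which host port Docker mapped to the node container's SSH port 22. The same Go-template query from a small exec sketch (format string copied verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-798711").Output()
	if err != nil {
		panic(err)
	}
	// Prints the mapped port, e.g. 32793 as in the sshutil lines below.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
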
	I1002 21:28:35.334724  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.351684  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.399040  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:35.412985  155675 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
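
node_ready.go then polls the node's Ready condition until the 6m0s timeout, tolerating transient errors (the long run of "connection refused" warnings below is that loop). A condensed version of the pattern with client-go's wait helpers, as a sketch (the function name is mine; clientset construction is as in the rest.Config example above):

package nodeready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // e.g. connection refused while the apiserver restarts
				return false, nil               // swallow the error so polling continues
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
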
	I1002 21:28:35.442600  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.460605  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:35.502524  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.502566  155675 retry.go:31] will retry after 185.764836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.517773  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.517809  155675 retry.go:31] will retry after 133.246336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
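
Each failed apply is handed to retry.go, which reschedules it after a growing, jittered delay until the apiserver answers again; the pages of near-identical failures that follow are that loop running for both manifests. A self-contained sketch of the pattern in pure Go (the helper name is mine and the delays are illustrative, not minikube's exact schedule):

package retryapply

import (
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f path` until it succeeds
// or the deadline passes, roughly doubling the delay between attempts.
func applyWithRetry(path string, deadline time.Duration) error {
	delay := 150 * time.Millisecond
	start := time.Now()
	for {
		err := exec.Command("kubectl", "apply", "--force", "-f", path).Run()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return err
		}
		// Jitter keeps concurrent retries (storageclass + storage-provisioner) from syncing up.
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
}
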
	I1002 21:28:35.652188  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.688959  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:35.715291  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.715332  155675 retry.go:31] will retry after 306.166157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.759518  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.759549  155675 retry.go:31] will retry after 301.391679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.022497  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:36.061160  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.079961  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.080007  155675 retry.go:31] will retry after 697.847532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:36.118232  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.118271  155675 retry.go:31] will retry after 395.582354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.514512  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.568051  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.568086  155675 retry.go:31] will retry after 646.007893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.778586  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:36.832650  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.832688  155675 retry.go:31] will retry after 716.06432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.214893  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:37.268191  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.268279  155675 retry.go:31] will retry after 854.849255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:37.413941  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:37.549248  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:37.603971  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.604014  155675 retry.go:31] will retry after 1.344807605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.124286  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:38.177165  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.177199  155675 retry.go:31] will retry after 1.263429075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.949653  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:39.003395  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.003428  155675 retry.go:31] will retry after 2.765859651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:39.414384  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:39.441621  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:39.494342  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.494371  155675 retry.go:31] will retry after 2.952922772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:41.414500  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:41.769964  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:41.823729  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:41.823776  155675 retry.go:31] will retry after 2.930479483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.447772  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:42.501213  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.501266  155675 retry.go:31] will retry after 3.721393623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:43.414622  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:44.755175  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:44.807949  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:44.807981  155675 retry.go:31] will retry after 4.46774792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:45.913827  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:46.223306  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:46.275912  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:46.275942  155675 retry.go:31] will retry after 9.165769414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:48.413715  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:49.276318  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:49.331953  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:49.331996  155675 retry.go:31] will retry after 7.553909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:50.913554  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:53.413799  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:55.442725  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:55.495811  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:55.495844  155675 retry.go:31] will retry after 8.398663559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:55.913916  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:56.886337  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:56.938883  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:56.938912  155675 retry.go:31] will retry after 5.941880418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:58.414176  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:00.913767  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:02.881855  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:02.913856  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:02.936281  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:02.936310  155675 retry.go:31] will retry after 8.801429272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.895505  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:03.949396  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.949425  155675 retry.go:31] will retry after 8.280385033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:04.914589  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:07.413893  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:09.414585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:11.738357  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:11.791944  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:11.791978  155675 retry.go:31] will retry after 20.07436133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:11.913506  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:12.230962  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:12.284322  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:12.284367  155675 retry.go:31] will retry after 31.198537936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:13.913570  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:15.913975  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:18.413914  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:20.913884  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:22.914461  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:25.414237  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:27.914518  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:30.414136  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:31.867242  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:31.921723  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:31.921774  155675 retry.go:31] will retry after 19.984076529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:32.913680  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:34.914116  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:36.914541  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:39.414546  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:41.914263  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:43.484108  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:43.536861  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:43.536898  155675 retry.go:31] will retry after 27.176524941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:44.413860  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:46.414476  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:48.914309  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:51.414076  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:51.906696  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:51.960820  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:51.960952  155675 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:29:53.414245  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:55.913983  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:58.413904  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:00.913802  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:02.914585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:05.414592  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:07.914259  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:10.413676  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:30:10.714113  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:30:10.768467  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:30:10.768623  155675 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:30:10.771151  155675 out.go:179] * Enabled addons: 
	I1002 21:30:10.772416  155675 addons.go:514] duration metric: took 1m35.489723071s for enable addons: enabled=[]
	W1002 21:30:12.413723  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:14.414457  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:16.913965  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:19.413730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:21.414406  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:23.913870  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:26.413629  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:28.414046  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:30.414474  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:32.914093  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:35.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:37.914285  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:39.914538  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:42.413582  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:44.413882  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:46.414229  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:48.913587  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:50.914483  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:53.413612  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:55.413685  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:57.414468  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:59.913623  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:02.414537  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:04.913937  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:06.914435  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:09.414047  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:11.913920  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:13.914248  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:15.914508  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:18.413878  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:20.913663  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:23.413996  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:25.414227  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:27.414386  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:29.414601  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:31.913548  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:33.913846  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:35.913989  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:38.414223  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:40.414407  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:42.914396  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:45.413639  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:47.913627  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:49.913793  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:52.413722  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:54.414032  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:56.414437  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:58.913898  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:01.413677  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:03.413857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:05.414152  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:07.414277  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:09.414527  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:11.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:14.413681  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:16.413854  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:18.414029  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:20.913949  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:22.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:25.413701  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:27.414620  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:29.914027  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:32.414041  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:34.414502  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:36.914551  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:39.413809  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:41.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:43.913943  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:45.914242  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:47.914422  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:50.413682  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:52.913674  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:54.913997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:56.914580  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:59.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:01.414035  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:03.414188  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:05.913578  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:07.913616  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:09.913947  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:12.413832  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:14.413971  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:16.414484  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:18.913973  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:21.413936  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:23.414140  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:25.414411  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:27.913573  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:29.913817  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:32.413645  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:34.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:36.414473  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:38.913857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:41.413732  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:43.413888  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:45.913712  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:48.413850  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:50.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:53.413931  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:55.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:57.414522  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:59.913776  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:02.413563  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:04.413718  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:06.414028  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:08.414119  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:10.914009  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:13.414193  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:15.414496  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:17.913661  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:19.913874  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:22.413686  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:24.413997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:26.414507  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:28.913912  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:31.414590  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:33.913730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:34:35.413657  155675 node_ready.go:38] duration metric: took 6m0.000618353s for node "ha-798711" to be "Ready" ...
	I1002 21:34:35.416036  155675 out.go:203] 
	W1002 21:34:35.417586  155675 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:34:35.417604  155675 out.go:285] * 
	W1002 21:34:35.419340  155675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:34:35.420515  155675 out.go:203] 
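	The 6m figure in the GUEST_START error is minikube's default node-ready wait, exposed via `minikube start`'s `--wait` and `--wait-timeout` flags. Raising it would only help if the node eventually became ready, which the container-create failures below rule out; a sketch of the tunable anyway, assuming the same binary and profile:

	out/minikube-linux-amd64 start -p ha-798711 --wait=all --wait-timeout=10m0s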
	
	
	==> CRI-O <==
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.720667796Z" level=info msg="createCtr: deleting container fed7957e391d22ff1b00c20bf39a2629000d28f6ef8e95fd7a1cc105294d4cf9 from storage" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.722375917Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=4413d481-dcd8-40f8-a194-faad19686e63 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.72271947Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.692896144Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e092e879-fb2b-4560-a09a-806f8c083612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.693827784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=07d8f06d-5d02-4f43-8d50-922b2fad57f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.694857175Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.695091123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698533022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698951966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.713793668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715185541Z" level=info msg="createCtr: deleting container ID 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from idIndex" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.71522329Z" level=info msg="createCtr: removing container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715264586Z" level=info msg="createCtr: deleting container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from storage" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.717552516Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.692941794Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3f439862-6d29-437c-85d6-7d524d8b447f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.693856663Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=49ad3c8f-3a68-4392-9a21-40f72e2ac9f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694777439Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694993707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.6985454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.698958295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.717136088Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718679137Z" level=info msg="createCtr: deleting container ID 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from idIndex" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718727129Z" level=info msg="createCtr: removing container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718795592Z" level=info msg="createCtr: deleting container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from storage" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.721164251Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
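	Every CreateContainer attempt in the CRI-O log above dies with "cannot open sd-bus: No such file or directory", i.e. the runtime is trying to reach systemd over its bus (typically for systemd cgroup management) and no bus is available. A quick way to confirm which cgroup manager CRI-O is configured with (a sketch; /etc/crio/ is the conventional config location, not verified from this log):

	minikube ssh -p ha-798711 -- sudo grep -Rn cgroup_manager /etc/crio/
	# "systemd" requires a reachable sd-bus; "cgroupfs" does not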
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:34:36.354561    1999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:36.355144    1999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:36.356770    1999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:36.357164    1999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:36.358414    1999 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
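	kubectl here runs with the in-node kubeconfig, so the refused connection is to the node's own apiserver endpoint. To see which server that kubeconfig actually targets (path taken from the command above):

	minikube ssh -p ha-798711 -- sudo grep 'server:' /var/lib/minikube/kubeconfig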
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:34:36 up  3:16,  0 user,  load average: 0.07, 0.09, 0.09
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:34:24 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:24 ha-798711 kubelet[669]: E1002 21:34:24.724146     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	Oct 02 21:34:26 ha-798711 kubelet[669]: E1002 21:34:26.360862     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac9d380df39a3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,LastTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.692416     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718016     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > podSandboxID="26c7d26dc814a6069dd754062dbc6b80b5e77155b8bcfd144b82a577d7aa24f0"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718124     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718154     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.692460     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721530     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > podSandboxID="03e68d2f04bf8c206661aee5adee3f6f82f0584fb4c70614b572bca6f0516412"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721638     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721683     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.326592     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:34:30 ha-798711 kubelet[669]: I1002 21:34:30.497709     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.498117     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:34:34 ha-798711 kubelet[669]: E1002 21:34:34.706002     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.355106     669 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-798711&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.361390     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac9d380df39a3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,LastTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
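	The kubelet loop above and the empty container-status table earlier are consistent: no control-plane container ever starts, so the node can never register with the apiserver. To enumerate every container CRI-O attempted, including failed creates (a sketch using crictl, assuming it is present in the node image as it normally is for kicbase):

	minikube ssh -p ha-798711 -- sudo crictl ps -a
	minikube ssh -p ha-798711 -- sudo crictl pods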
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (298.296962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (368.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:415: expected profile "ha-798711" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
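	The assertion compares the Status field of the profile entry in the valid array. To pull just that field out of the same command (assuming jq is available on the host):

	out/minikube-linux-amd64 profile list --output json | jq -r '.valid[] | "\(.Name): \(.Status)"'
	# expected "Degraded" after the partial restart; this run reports "Starting"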
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:28:28.629176332Z",
	            "FinishedAt": "2025-10-02T21:28:27.30406005Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6709695e88674e10e353a7a1e6a5f597599db0f8dff17de25e6a675a5a052e8",
	            "SandboxKey": "/var/run/docker/netns/e6709695e886",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:b8:bb:5f:71:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "d6008f1fd1a1f997c0b42aeef656e8d86f4f11d2951f29e56ff47db4f71a71ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
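The "Ports" map in the inspect output above is what the start log below keeps reading back: the cli_runner lines resolve the host side of the container's 22/tcp mapping with a Go template. The same lookup as a stand-alone command (profile name taken from this report):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-798711

This prints 32793 here, matching the HostPort recorded above.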
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (292.686628ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                                               │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                                           │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │ 02 Oct 25 21:28 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:28:28
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:28:28.403003  155675 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.403116  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403125  155675 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.403129  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403315  155675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.403776  155675 out.go:368] Setting JSON to false
	I1002 21:28:28.404642  155675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11449,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:28:28.404726  155675 start.go:140] virtualization: kvm guest
	I1002 21:28:28.406949  155675 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:28:28.408440  155675 notify.go:220] Checking for updates...
	I1002 21:28:28.408467  155675 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:28:28.409938  155675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:28:28.411145  155675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:28.412417  155675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:28:28.413758  155675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:28:28.415028  155675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:28:28.416927  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.417596  155675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:28:28.441148  155675 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:28:28.441315  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.496626  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.486980606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.496755  155675 docker.go:318] overlay module found
	I1002 21:28:28.498705  155675 out.go:179] * Using the docker driver based on existing profile
	I1002 21:28:28.499971  155675 start.go:304] selected driver: docker
	I1002 21:28:28.499988  155675 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.500076  155675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:28:28.500152  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.554609  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.545101226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.555297  155675 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:28:28.555338  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:28.555400  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:28.555463  155675 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.557542  155675 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:28:28.558794  155675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:28:28.559993  155675 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:28:28.561213  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:28.561259  155675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:28:28.561268  155675 cache.go:58] Caching tarball of preloaded images
	I1002 21:28:28.561312  155675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:28:28.561377  155675 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:28:28.561394  155675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:28:28.561531  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.581862  155675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:28:28.581882  155675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:28:28.581898  155675 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:28:28.581920  155675 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:28:28.581974  155675 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:28:28.581991  155675 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:28:28.581998  155675 fix.go:54] fixHost starting: 
	I1002 21:28:28.582193  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.600330  155675 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:28:28.600370  155675 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:28:28.602558  155675 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:28:28.602629  155675 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:28:28.838867  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.857507  155675 kic.go:430] container "ha-798711" state is running.
	I1002 21:28:28.857953  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:28.875695  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.875935  155675 machine.go:93] provisionDockerMachine start ...
	I1002 21:28:28.876007  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:28.894590  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:28.894848  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:28.894862  155675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:28:28.895489  155675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32860->127.0.0.1:32793: read: connection reset by peer
	I1002 21:28:32.042146  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
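The failed dial at 21:28:28.895489 is a transient, not a test error: the container had just been restarted and its sshd was not yet accepting connections, so the first TCP handshake is reset and libmachine retries until the hostname command returns at 21:28:32. A manual probe of the same endpoint would look like the following sketch, with the key path and docker user taken from the sshutil lines later in this log:

	ssh -i /home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa -p 32793 docker@127.0.0.1 hostname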
	I1002 21:28:32.042175  155675 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:28:32.042247  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.060169  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.060387  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.060400  155675 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:28:32.214017  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.214104  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.232113  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.232342  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.232359  155675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:28:32.376535  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:28:32.376566  155675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:28:32.376584  155675 ubuntu.go:190] setting up certificates
	I1002 21:28:32.376592  155675 provision.go:84] configureAuth start
	I1002 21:28:32.376642  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:32.396020  155675 provision.go:143] copyHostCerts
	I1002 21:28:32.396062  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396100  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:28:32.396116  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396183  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:28:32.396277  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396305  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:28:32.396320  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396353  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:28:32.396398  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396415  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:28:32.396419  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396441  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:28:32.396489  155675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
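The san list on the line above becomes the subjectAltName of the generated server certificate, so the machine's TLS endpoint verifies against both its IPs (127.0.0.1, 192.168.49.2) and its names (ha-798711, localhost, minikube). One way to confirm the SANs on the resulting file (the -ext flag needs OpenSSL 1.1.1 or newer):

	openssl x509 -noout -ext subjectAltName -in /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem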
	I1002 21:28:32.512217  155675 provision.go:177] copyRemoteCerts
	I1002 21:28:32.512275  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:28:32.512317  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.530566  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:32.631941  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:28:32.631999  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:28:32.649350  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:28:32.649401  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:28:32.666579  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:28:32.666640  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:28:32.684729  155675 provision.go:87] duration metric: took 308.118918ms to configureAuth
	I1002 21:28:32.684867  155675 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:28:32.685043  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:32.685148  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.703210  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.703437  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.703461  155675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:28:32.962015  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
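The file written above, /etc/sysconfig/crio.minikube, is an environment file picked up by the crio service in the kicbase image (an assumption from the sysconfig path convention; the unit file itself is not shown in this log). Its --insecure-registry 10.96.0.0/12 covers the cluster's service CIDR from the config above, so registry services running inside the cluster can be pulled from without TLS. To see how the unit consumes it on a live node:

	minikube -p ha-798711 ssh -- systemctl cat crio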
	I1002 21:28:32.962052  155675 machine.go:96] duration metric: took 4.086102415s to provisionDockerMachine
	I1002 21:28:32.962066  155675 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:28:32.962081  155675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:28:32.962161  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:28:32.962205  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.980349  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.082626  155675 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:28:33.086352  155675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:28:33.086384  155675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:28:33.086398  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:28:33.086455  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:28:33.086573  155675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:28:33.086598  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:28:33.086723  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:28:33.094470  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:33.112480  155675 start.go:296] duration metric: took 150.396395ms for postStartSetup
	I1002 21:28:33.112566  155675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:33.112609  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.130086  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.230100  155675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:28:33.235048  155675 fix.go:56] duration metric: took 4.65304118s for fixHost
	I1002 21:28:33.235074  155675 start.go:83] releasing machines lock for "ha-798711", held for 4.653089722s
	I1002 21:28:33.235148  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:33.253218  155675 ssh_runner.go:195] Run: cat /version.json
	I1002 21:28:33.253241  155675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:28:33.253280  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.253330  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.273049  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.273536  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.445879  155675 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:33.452886  155675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:28:33.488518  155675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:28:33.493393  155675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:28:33.493458  155675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:28:33.501643  155675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:28:33.501669  155675 start.go:495] detecting cgroup driver to use...
	I1002 21:28:33.501700  155675 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:28:33.501756  155675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:28:33.515853  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:28:33.528213  155675 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:28:33.528272  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:28:33.542828  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:28:33.556143  155675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:28:33.634827  155675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:28:33.716388  155675 docker.go:234] disabling docker service ...
	I1002 21:28:33.716495  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:28:33.731194  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:28:33.744342  155675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:28:33.823830  155675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:28:33.905576  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:28:33.918701  155675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:28:33.933267  155675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:28:33.933327  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.942732  155675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:28:33.942809  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.951932  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.961276  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.970164  155675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:28:33.978507  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.987369  155675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.995524  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:34.004102  155675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:28:34.011220  155675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:28:34.018342  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.095886  155675 ssh_runner.go:195] Run: sudo systemctl restart crio
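The net effect of the sed sequence above on /etc/crio/crio.conf.d/02-crio.conf, sketched from the commands rather than dumped from the file, is roughly:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The restart above is what makes cri-o load these values; the next lines wait on the socket and on crictl before continuing.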
	I1002 21:28:34.203604  155675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:28:34.203665  155675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:28:34.207612  155675 start.go:563] Will wait 60s for crictl version
	I1002 21:28:34.207675  155675 ssh_runner.go:195] Run: which crictl
	I1002 21:28:34.211395  155675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:28:34.235415  155675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:28:34.235492  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.263418  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.293048  155675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:28:34.294508  155675 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:28:34.312107  155675 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:28:34.316513  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
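Both /etc/hosts rewrites in this start (host.minikube.internal here, control-plane.minikube.internal at 21:28:34.490 below) use the same idiom: grep -v drops any stale line for the name, the fresh mapping is appended into a temp file, and sudo cp installs the result, because a plain `sudo ... > /etc/hosts` would run the redirection as the unprivileged caller. With NAME and ADDR as placeholders, the pattern is:

	{ grep -v $'\tNAME$' /etc/hosts; echo $'ADDR\tNAME'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts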
	I1002 21:28:34.327623  155675 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:28:34.327797  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:34.327859  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.360824  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.360849  155675 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:28:34.360901  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.388164  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.388188  155675 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:28:34.388197  155675 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:28:34.388287  155675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:28:34.388349  155675 ssh_runner.go:195] Run: crio config
	I1002 21:28:34.434047  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:34.434070  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:34.434089  155675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:28:34.434108  155675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:28:34.434226  155675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:28:34.434286  155675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:28:34.442337  155675 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:28:34.442397  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:28:34.450473  155675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:28:34.462634  155675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:28:34.474595  155675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:28:34.486784  155675 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:28:34.490250  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.499967  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.576427  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:34.601305  155675 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:28:34.601329  155675 certs.go:195] generating shared ca certs ...
	I1002 21:28:34.601346  155675 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:34.601512  155675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:28:34.601558  155675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:28:34.601570  155675 certs.go:257] generating profile certs ...
	I1002 21:28:34.601674  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:28:34.601761  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:28:34.601817  155675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:28:34.601830  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:28:34.601853  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:28:34.601878  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:28:34.601897  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:28:34.601915  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:28:34.601943  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:28:34.601963  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:28:34.601979  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:28:34.602044  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:28:34.602085  155675 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:28:34.602098  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:28:34.602132  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:28:34.602161  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:28:34.602187  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:28:34.602249  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:34.602291  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.602313  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.602334  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.603145  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:28:34.622533  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:28:34.642167  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:28:34.661662  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:28:34.684982  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:28:34.703295  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:28:34.721710  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:28:34.739228  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:28:34.756359  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:28:34.773708  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:28:34.791360  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:28:34.809607  155675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
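
The block above pairs each local certificate with a fixed destination under /var/lib/minikube/certs and then copies it to the node over SSH. A minimal sketch of the same source-to-destination loop, shelling out to scp against the node's forwarded SSH port from this run (the local paths, the user, and the copyCerts helper are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// copyCerts mirrors the NewFileAsset -> scp pairs above: each local
// cert file is pushed to a fixed path under /var/lib/minikube/certs.
func copyCerts(port string) error {
	assets := map[string]string{ // local path -> remote path (illustrative)
		"/tmp/ca.crt":        "/var/lib/minikube/certs/ca.crt",
		"/tmp/apiserver.crt": "/var/lib/minikube/certs/apiserver.crt",
	}
	for src, dst := range assets {
		// -P selects the node container's forwarded SSH port.
		cmd := exec.Command("scp", "-P", port, src, fmt.Sprintf("docker@127.0.0.1:%s", dst))
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("scp %s: %v: %s", src, err, out)
		}
	}
	return nil
}

func main() {
	if err := copyCerts("32793"); err != nil { // port taken from the sshutil lines below
		log.Fatal(err)
	}
}

minikube's ssh_runner streams these over an already-established SSH session rather than invoking scp per file, so treat this only as the shape of the operation.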
	I1002 21:28:34.822659  155675 ssh_runner.go:195] Run: openssl version
	I1002 21:28:34.828896  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:28:34.837462  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841707  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841776  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.876686  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:28:34.885143  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:28:34.893940  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897851  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897917  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.932255  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:28:34.940703  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:28:34.949899  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953722  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953783  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.989786  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
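
Each `openssl x509 -hash -noout` call above prints the certificate's subject-name hash (b5213941 for minikubeCA, 51391683 and 3ec20f2e for the test certs), and the follow-up `ln -fs` exposes the cert as /etc/ssl/certs/<hash>.0, which is how OpenSSL-based clients locate trust anchors. A hedged Go sketch of that hash-and-symlink step (installCA is a made-up helper name):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

// installCA reproduces the hash-and-symlink step from the log:
// compute the OpenSSL subject hash of the cert, then expose it as
// /etc/ssl/certs/<hash>.0 so TLS clients can resolve it.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// ln -fs equivalent: drop any stale link before re-creating it.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}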
	I1002 21:28:34.998247  155675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:28:35.002235  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:28:35.036665  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:28:35.070968  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:28:35.106690  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:28:35.154498  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:28:35.193796  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
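
The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force certificate regeneration. The same check in pure Go, under the assumption that each file holds a single PEM certificate (expiresWithin is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d,
// matching `openssl x509 -checkend <seconds>` (86400s = 24h in the log).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}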
	I1002 21:28:35.228071  155675 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:35.228163  155675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:28:35.228246  155675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:28:35.256219  155675 cri.go:89] found id: ""
	I1002 21:28:35.256288  155675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:28:35.264604  155675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
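
The `sudo ls` probe above is the whole restart-vs-init decision: if kubeadm-flags.env, config.yaml, and the etcd data directory all exist, minikube attempts a cluster restart instead of a fresh `kubeadm init`. A rough sketch of that branch, where runSSH stands in for minikube's ssh_runner (a hypothetical helper, not the real signature):

package main

import (
	"fmt"
	"os/exec"
)

// hasExistingConfig mirrors the probe above: `ls` exits non-zero if any
// of the three paths is missing, so success means prior cluster state.
func hasExistingConfig(runSSH func(args ...string) error) bool {
	err := runSSH("sudo", "ls",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd")
	return err == nil
}

func main() {
	// Locally-executed stand-in for the remote runner, for illustration only.
	run := func(args ...string) error { return exec.Command(args[0], args[1:]...).Run() }
	if hasExistingConfig(run) {
		fmt.Println("found existing configuration files, attempting cluster restart")
	} else {
		fmt.Println("no prior state, initializing a fresh control plane")
	}
}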
	I1002 21:28:35.264627  155675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:28:35.264674  155675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:28:35.271961  155675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:35.272339  155675 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.272429  155675 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:28:35.272674  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
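
The kubeconfig repair above notices that the "ha-798711" cluster and context entries are missing and writes them back under a file lock. A minimal client-go sketch of the same check-and-add, assuming a plain kubeconfig path and no locking (ensureContext is a made-up name):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// ensureContext adds the profile's cluster and context entries to the
// kubeconfig if they are absent, then writes the file back.
func ensureContext(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Path and server taken from this run; adjust for your environment.
	if err := ensureContext("/home/jenkins/minikube-integration/21682-80114/kubeconfig",
		"ha-798711", "https://192.168.49.2:8443"); err != nil {
		log.Fatal(err)
	}
}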
	I1002 21:28:35.273223  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.273680  155675 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:28:35.273697  155675 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:28:35.273706  155675 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:28:35.273711  155675 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:28:35.273716  155675 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:28:35.273768  155675 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:28:35.274106  155675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:28:35.281708  155675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:28:35.281757  155675 kubeadm.go:601] duration metric: took 17.1218ms to restartPrimaryControlPlane
	I1002 21:28:35.281768  155675 kubeadm.go:402] duration metric: took 53.709514ms to StartCluster
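
The `sudo diff -u` of kubeadm.yaml against kubeadm.yaml.new a few lines above is what decides "does not require reconfiguration": diff exits 0 when the rendered configs match. A small sketch of reading that exit status (needsReconfig is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// needsReconfig runs `diff -u old new` the way the log does: diff exits
// 0 when the files match, 1 when they differ, >1 on error.
func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: cluster does not require reconfiguration
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // files differ: re-run kubeadm with the new config
	}
	return false, err
}

func main() {
	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfig:", diff, "err:", err)
}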
	I1002 21:28:35.281788  155675 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.281855  155675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.282359  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.282590  155675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:28:35.282703  155675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:28:35.282793  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:35.282811  155675 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:28:35.282831  155675 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:28:35.282837  155675 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:28:35.282853  155675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:28:35.282867  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.283211  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.283373  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.287818  155675 out.go:179] * Verifying Kubernetes components...
	I1002 21:28:35.289179  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:35.305536  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.305848  155675 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:28:35.305892  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.306218  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.306573  155675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:28:35.307769  155675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.307789  155675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:28:35.307839  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.330701  155675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.330727  155675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:28:35.330911  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.334724  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.351684  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
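
The `scp memory --> ...` lines above (here for the addon manifests, earlier for the kubeconfig) ship bytes straight from memory to a path on the node, with no local temp file. One way to approximate that with plain ssh and tee, assuming the forwarded port 32793 from the sshutil lines (writeRemote and the inline YAML are illustrative, not minikube's mechanism):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// writeRemote mimics "scp memory --> <path>": the manifest never
// touches the local disk; its bytes are piped over SSH into tee.
func writeRemote(port, path string, manifest []byte) error {
	cmd := exec.Command("ssh", "-p", port, "docker@127.0.0.1",
		fmt.Sprintf("sudo tee %s >/dev/null", path))
	cmd.Stdin = bytes.NewReader(manifest)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("write %s: %v: %s", path, err, out)
	}
	return nil
}

func main() {
	yaml := []byte("apiVersion: storage.k8s.io/v1\nkind: StorageClass\nmetadata:\n  name: standard\n")
	if err := writeRemote("32793", "/etc/kubernetes/addons/storageclass.yaml", yaml); err != nil {
		panic(err)
	}
}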
	I1002 21:28:35.399040  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:35.412985  155675 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
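
node_ready.go above polls the node's Ready condition for up to 6m0s, logging and swallowing the transient connection-refused errors that fill the rest of this log. A client-go sketch of such a poll loop, assuming kubeconfig-based auth (waitNodeReady is a made-up helper, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition, treating API errors
// as retryable because the apiserver may still be coming up.
func waitNodeReady(cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err) // e.g. connection refused
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(kubernetes.NewForConfigOrDie(cfg), "ha-798711"); err != nil {
		panic(err)
	}
}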
	I1002 21:28:35.442600  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.460605  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:35.502524  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.502566  155675 retry.go:31] will retry after 185.764836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.517773  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.517809  155675 retry.go:31] will retry after 133.246336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
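
Every failed apply above funnels through retry.go, which sleeps a randomized, roughly growing interval (185ms and 133ms here, climbing past 30s later in the log) before rerunning the same kubectl command. A generic sketch of that retry-with-jittered-backoff pattern; the exact backoff policy is minikube's own and is not reproduced here:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries, in the spirit of
// the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << i                                     // exponential growth
		d = d/2 + time.Duration(rand.Int63n(int64(d/2+1))) // randomize within [d/2, d]
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		i++
		if i < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}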
	I1002 21:28:35.652188  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.688959  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:35.715291  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.715332  155675 retry.go:31] will retry after 306.166157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.759518  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.759549  155675 retry.go:31] will retry after 301.391679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.022497  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:36.061160  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.079961  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.080007  155675 retry.go:31] will retry after 697.847532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:36.118232  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.118271  155675 retry.go:31] will retry after 395.582354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.514512  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.568051  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.568086  155675 retry.go:31] will retry after 646.007893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.778586  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:36.832650  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.832688  155675 retry.go:31] will retry after 716.06432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.214893  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:37.268191  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.268279  155675 retry.go:31] will retry after 854.849255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:37.413941  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:37.549248  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:37.603971  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.604014  155675 retry.go:31] will retry after 1.344807605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.124286  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:38.177165  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.177199  155675 retry.go:31] will retry after 1.263429075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.949653  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:39.003395  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.003428  155675 retry.go:31] will retry after 2.765859651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:39.414384  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:39.441621  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:39.494342  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.494371  155675 retry.go:31] will retry after 2.952922772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:41.414500  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:41.769964  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:41.823729  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:41.823776  155675 retry.go:31] will retry after 2.930479483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.447772  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:42.501213  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.501266  155675 retry.go:31] will retry after 3.721393623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:43.414622  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:44.755175  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:44.807949  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:44.807981  155675 retry.go:31] will retry after 4.46774792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:45.913827  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:46.223306  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:46.275912  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:46.275942  155675 retry.go:31] will retry after 9.165769414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:48.413715  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:49.276318  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:49.331953  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:49.331996  155675 retry.go:31] will retry after 7.553909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:50.913554  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:53.413799  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:55.442725  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:55.495811  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:55.495844  155675 retry.go:31] will retry after 8.398663559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:55.913916  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:56.886337  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:56.938883  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:56.938912  155675 retry.go:31] will retry after 5.941880418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:58.414176  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:00.913767  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:02.881855  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:02.913856  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:02.936281  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:02.936310  155675 retry.go:31] will retry after 8.801429272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.895505  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:03.949396  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.949425  155675 retry.go:31] will retry after 8.280385033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:04.914589  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:07.413893  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:09.414585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:11.738357  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:11.791944  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:11.791978  155675 retry.go:31] will retry after 20.07436133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:11.913506  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:12.230962  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:12.284322  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:12.284367  155675 retry.go:31] will retry after 31.198537936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:13.913570  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:15.913975  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:18.413914  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:20.913884  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:22.914461  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:25.414237  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:27.914518  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:30.414136  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:31.867242  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:31.921723  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:31.921774  155675 retry.go:31] will retry after 19.984076529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:32.913680  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:34.914116  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:36.914541  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:39.414546  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:41.914263  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:43.484108  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:43.536861  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:43.536898  155675 retry.go:31] will retry after 27.176524941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:44.413860  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:46.414476  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:48.914309  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:51.414076  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:51.906696  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:51.960820  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:51.960952  155675 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:29:53.414245  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:55.913983  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:58.413904  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:00.913802  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:02.914585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:05.414592  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:07.914259  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:10.413676  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:30:10.714113  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:30:10.768467  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:30:10.768623  155675 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:30:10.771151  155675 out.go:179] * Enabled addons: 
	I1002 21:30:10.772416  155675 addons.go:514] duration metric: took 1m35.489723071s for enable addons: enabled=[]
	W1002 21:30:12.413723  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:14.414457  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:16.913965  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:19.413730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:21.414406  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:23.913870  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:26.413629  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:28.414046  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:30.414474  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:32.914093  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:35.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:37.914285  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:39.914538  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:42.413582  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:44.413882  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:46.414229  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:48.913587  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:50.914483  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:53.413612  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:55.413685  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:57.414468  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:59.913623  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:02.414537  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:04.913937  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:06.914435  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:09.414047  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:11.913920  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:13.914248  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:15.914508  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:18.413878  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:20.913663  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:23.413996  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:25.414227  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:27.414386  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:29.414601  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:31.913548  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:33.913846  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:35.913989  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:38.414223  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:40.414407  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:42.914396  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:45.413639  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:47.913627  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:49.913793  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:52.413722  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:54.414032  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:56.414437  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:31:58.913898  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:01.413677  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:03.413857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:05.414152  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:07.414277  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:09.414527  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:11.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:14.413681  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:16.413854  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:18.414029  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:20.913949  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:22.914491  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:25.413701  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:27.414620  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:29.914027  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:32.414041  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:34.414502  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:36.914551  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:39.413809  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:41.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:43.913943  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:45.914242  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:47.914422  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:50.413682  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:52.913674  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:54.913997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:56.914580  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:32:59.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:01.414035  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:03.414188  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:05.913578  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:07.913616  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:09.913947  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:12.413832  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:14.413971  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:16.414484  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:18.913973  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:21.413936  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:23.414140  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:25.414411  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:27.913573  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:29.913817  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:32.413645  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:34.413963  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:36.414473  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:38.913857  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:41.413732  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:43.413888  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:45.913712  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:48.413850  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:50.913725  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:53.413931  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:55.414296  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:57.414522  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:33:59.913776  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:02.413563  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:04.413718  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:06.414028  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:08.414119  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:10.914009  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:13.414193  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:15.414496  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:17.913661  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:19.913874  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:22.413686  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:24.413997  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:26.414507  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:28.913912  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:31.414590  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:34:33.913730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:34:35.413657  155675 node_ready.go:38] duration metric: took 6m0.000618353s for node "ha-798711" to be "Ready" ...
	I1002 21:34:35.416036  155675 out.go:203] 
	W1002 21:34:35.417586  155675 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:34:35.417604  155675 out.go:285] * 
	W1002 21:34:35.419340  155675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:34:35.420515  155675 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.720667796Z" level=info msg="createCtr: deleting container fed7957e391d22ff1b00c20bf39a2629000d28f6ef8e95fd7a1cc105294d4cf9 from storage" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.722375917Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=4413d481-dcd8-40f8-a194-faad19686e63 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.72271947Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.692896144Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e092e879-fb2b-4560-a09a-806f8c083612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.693827784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=07d8f06d-5d02-4f43-8d50-922b2fad57f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.694857175Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.695091123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698533022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698951966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.713793668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715185541Z" level=info msg="createCtr: deleting container ID 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from idIndex" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.71522329Z" level=info msg="createCtr: removing container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715264586Z" level=info msg="createCtr: deleting container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from storage" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.717552516Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.692941794Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3f439862-6d29-437c-85d6-7d524d8b447f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.693856663Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=49ad3c8f-3a68-4392-9a21-40f72e2ac9f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694777439Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694993707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.6985454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.698958295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.717136088Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718679137Z" level=info msg="createCtr: deleting container ID 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from idIndex" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718727129Z" level=info msg="createCtr: removing container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718795592Z" level=info msg="createCtr: deleting container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from storage" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.721164251Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:34:37.949674    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:37.950216    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:37.951789    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:37.952208    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:37.953710    2171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:34:37 up  3:16,  0 user,  load average: 0.15, 0.10, 0.09
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.692416     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718016     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > podSandboxID="26c7d26dc814a6069dd754062dbc6b80b5e77155b8bcfd144b82a577d7aa24f0"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718124     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718154     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.692460     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721530     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > podSandboxID="03e68d2f04bf8c206661aee5adee3f6f82f0584fb4c70614b572bca6f0516412"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721638     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721683     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.326592     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:34:30 ha-798711 kubelet[669]: I1002 21:34:30.497709     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.498117     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:34:34 ha-798711 kubelet[669]: E1002 21:34:34.706002     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.355106     669 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-798711&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.361390     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac9d380df39a3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,LastTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:34:37 ha-798711 kubelet[669]: E1002 21:34:37.328268     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:34:37 ha-798711 kubelet[669]: I1002 21:34:37.500033     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:34:37 ha-798711 kubelet[669]: E1002 21:34:37.500531     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	

                                                
                                                
-- /stdout --
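A plausible reading of the dump above, offered as a hedged sketch rather than harness output: every static-pod CreateContainer in the CRI-O and kubelet sections fails with "cannot open sd-bus: No such file or directory", which typically means the OCI runtime was asked to place the container in a systemd cgroup scope over D-Bus and no systemd/D-Bus socket was reachable inside the node. With kube-apiserver, kube-controller-manager, kube-scheduler, and etcd all unable to start, nothing ever listens on 192.168.49.2:8443, so the six minutes of "connection refused" retries and the addon-apply failures are downstream symptoms; the --validate=false workaround suggested in the apply errors would not rescue them, since the apply itself also needs the apiserver. A minimal way to confirm the pattern from a saved log, assuming the logs.txt filename that the dump itself suggests:

	# Save the full log, then count the two failure signatures.
	out/minikube-linux-amd64 logs -p ha-798711 --file=logs.txt
	grep -c 'cannot open sd-bus' logs.txt            # container create failures
	grep -c 'connect: connection refused' logs.txt   # apiserver dials that follow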
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (299.572448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.60s)
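For context on the status probe above, a sketch under stated assumptions rather than harness output: the check renders a Go template over minikube's status struct, and "Stopped" with a non-zero exit code is the expected shape when only the apiserver is down, which is why the harness treats exit status 2 as "may be ok" and skips kubectl. The same probe widened to the other documented status fields, assuming the profile name from this run:

	out/minikube-linux-amd64 status -p ha-798711 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'
	# The exit code encodes which components are down; 0 means all running.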

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (1.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-798711 node add --control-plane --alsologtostderr -v 5: exit status 103 (248.768435ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p ha-798711"

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:34:38.395071  160311 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:34:38.395328  160311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:34:38.395337  160311 out.go:374] Setting ErrFile to fd 2...
	I1002 21:34:38.395340  160311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:34:38.395534  160311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:34:38.395840  160311 mustload.go:65] Loading cluster: ha-798711
	I1002 21:34:38.396203  160311 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:34:38.396569  160311 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:34:38.413520  160311 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:34:38.413823  160311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:34:38.469563  160311 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:34:38.458772681 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}

	I1002 21:34:38.469683  160311 api_server.go:166] Checking apiserver status ...
	I1002 21:34:38.469724  160311 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 21:34:38.469842  160311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:34:38.486712  160311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	W1002 21:34:38.590972  160311 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:34:38.593060  160311 out.go:179] * The control-plane node ha-798711 apiserver is not running: (state=Stopped)
	I1002 21:34:38.594628  160311 out.go:179]   To start a cluster, run: "minikube start -p ha-798711"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-798711 node add --control-plane --alsologtostderr -v 5" : exit status 103
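
The failing step and the recovery the CLI itself suggests can be replayed by hand. A minimal triage sketch in shell, assuming the same profile name (ha-798711) and relative binary path used by this run:

	# Re-run the failing step; exit status 103 matches the stopped-apiserver state logged above
	out/minikube-linux-amd64 -p ha-798711 node add --control-plane --alsologtostderr -v 5
	# Check component state before retrying
	out/minikube-linux-amd64 status -p ha-798711
	# Recovery path suggested in the stderr above
	minikube start -p ha-798711
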
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:28:28.629176332Z",
	            "FinishedAt": "2025-10-02T21:28:27.30406005Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6709695e88674e10e353a7a1e6a5f597599db0f8dff17de25e6a675a5a052e8",
	            "SandboxKey": "/var/run/docker/netns/e6709695e886",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:b8:bb:5f:71:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "d6008f1fd1a1f997c0b42aeef656e8d86f4f11d2951f29e56ff47db4f71a71ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
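
The inspect output above shows the container itself is healthy: State.Status is "running" and the guest SSH port 22/tcp is published on 127.0.0.1:32793, the same endpoint the failed node-add dialed. A quick sketch for pulling just those two fields, reusing the Go templates that appear in the logs (quoting adjusted for an interactive shell):

	# Container state only
	docker container inspect ha-798711 --format={{.State.Status}}
	# Host port mapped to the guest SSH port 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-798711
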
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (292.214123ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
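
Note the mismatch: the host reports Running, yet the command exits non-zero. The exit status appears to encode component health rather than host health alone (hence the harness's "may be ok"), so a running host does not imply a reachable apiserver. A short check under that assumption:

	# Host state only, exactly as the harness invokes it
	out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
	# Non-zero (2 in this run) flags a down component, here the stopped apiserver, despite the running host
	echo $?
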
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                                               │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                                           │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │ 02 Oct 25 21:28 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node add --control-plane --alsologtostderr -v 5                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:28:28
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:28:28.403003  155675 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.403116  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403125  155675 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.403129  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403315  155675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.403776  155675 out.go:368] Setting JSON to false
	I1002 21:28:28.404642  155675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11449,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:28:28.404726  155675 start.go:140] virtualization: kvm guest
	I1002 21:28:28.406949  155675 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:28:28.408440  155675 notify.go:220] Checking for updates...
	I1002 21:28:28.408467  155675 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:28:28.409938  155675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:28:28.411145  155675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:28.412417  155675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:28:28.413758  155675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:28:28.415028  155675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:28:28.416927  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.417596  155675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:28:28.441148  155675 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:28:28.441315  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.496626  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.486980606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.496755  155675 docker.go:318] overlay module found
	I1002 21:28:28.498705  155675 out.go:179] * Using the docker driver based on existing profile
	I1002 21:28:28.499971  155675 start.go:304] selected driver: docker
	I1002 21:28:28.499988  155675 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.500076  155675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:28:28.500152  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.554609  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.545101226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.555297  155675 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:28:28.555338  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:28.555400  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:28.555463  155675 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.557542  155675 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:28:28.558794  155675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:28:28.559993  155675 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:28:28.561213  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:28.561259  155675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:28:28.561268  155675 cache.go:58] Caching tarball of preloaded images
	I1002 21:28:28.561312  155675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:28:28.561377  155675 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:28:28.561394  155675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:28:28.561531  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.581862  155675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:28:28.581882  155675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:28:28.581898  155675 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:28:28.581920  155675 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:28:28.581974  155675 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:28:28.581991  155675 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:28:28.581998  155675 fix.go:54] fixHost starting: 
	I1002 21:28:28.582193  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.600330  155675 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:28:28.600370  155675 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:28:28.602558  155675 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:28:28.602629  155675 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:28:28.838867  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.857507  155675 kic.go:430] container "ha-798711" state is running.
	I1002 21:28:28.857953  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:28.875695  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.875935  155675 machine.go:93] provisionDockerMachine start ...
	I1002 21:28:28.876007  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:28.894590  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:28.894848  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:28.894862  155675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:28:28.895489  155675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32860->127.0.0.1:32793: read: connection reset by peer
	I1002 21:28:32.042146  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.042175  155675 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:28:32.042247  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.060169  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.060387  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.060400  155675 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:28:32.214017  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.214104  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.232113  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.232342  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.232359  155675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:28:32.376535  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:28:32.376566  155675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:28:32.376584  155675 ubuntu.go:190] setting up certificates
	I1002 21:28:32.376592  155675 provision.go:84] configureAuth start
	I1002 21:28:32.376642  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:32.396020  155675 provision.go:143] copyHostCerts
	I1002 21:28:32.396062  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396100  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:28:32.396116  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396183  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:28:32.396277  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396305  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:28:32.396320  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396353  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:28:32.396398  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396415  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:28:32.396419  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396441  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:28:32.396489  155675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:28:32.512217  155675 provision.go:177] copyRemoteCerts
	I1002 21:28:32.512275  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:28:32.512317  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.530566  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:32.631941  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:28:32.631999  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:28:32.649350  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:28:32.649401  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:28:32.666579  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:28:32.666640  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:28:32.684729  155675 provision.go:87] duration metric: took 308.118918ms to configureAuth
	I1002 21:28:32.684867  155675 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:28:32.685043  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:32.685148  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.703210  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.703437  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.703461  155675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:28:32.962015  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:28:32.962052  155675 machine.go:96] duration metric: took 4.086102415s to provisionDockerMachine
	I1002 21:28:32.962066  155675 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:28:32.962081  155675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:28:32.962161  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:28:32.962205  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.980349  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.082626  155675 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:28:33.086352  155675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:28:33.086384  155675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:28:33.086398  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:28:33.086455  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:28:33.086573  155675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:28:33.086598  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:28:33.086723  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:28:33.094470  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:33.112480  155675 start.go:296] duration metric: took 150.396395ms for postStartSetup
	I1002 21:28:33.112566  155675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:33.112609  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.130086  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.230100  155675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:28:33.235048  155675 fix.go:56] duration metric: took 4.65304118s for fixHost
	I1002 21:28:33.235074  155675 start.go:83] releasing machines lock for "ha-798711", held for 4.653089722s
	I1002 21:28:33.235148  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:33.253218  155675 ssh_runner.go:195] Run: cat /version.json
	I1002 21:28:33.253241  155675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:28:33.253280  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.253330  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.273049  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.273536  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.445879  155675 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:33.452886  155675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:28:33.488518  155675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:28:33.493393  155675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:28:33.493458  155675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:28:33.501643  155675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:28:33.501669  155675 start.go:495] detecting cgroup driver to use...
	I1002 21:28:33.501700  155675 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:28:33.501756  155675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:28:33.515853  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:28:33.528213  155675 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:28:33.528272  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:28:33.542828  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:28:33.556143  155675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:28:33.634827  155675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:28:33.716388  155675 docker.go:234] disabling docker service ...
	I1002 21:28:33.716495  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:28:33.731194  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:28:33.744342  155675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:28:33.823830  155675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:28:33.905576  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:28:33.918701  155675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:28:33.933267  155675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:28:33.933327  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.942732  155675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:28:33.942809  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.951932  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.961276  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.970164  155675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:28:33.978507  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.987369  155675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.995524  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:34.004102  155675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:28:34.011220  155675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:28:34.018342  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.095886  155675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1002 21:28:34.203604  155675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:28:34.203665  155675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:28:34.207612  155675 start.go:563] Will wait 60s for crictl version
	I1002 21:28:34.207675  155675 ssh_runner.go:195] Run: which crictl
	I1002 21:28:34.211395  155675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:28:34.235415  155675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:28:34.235492  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.263418  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.293048  155675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:28:34.294508  155675 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:28:34.312107  155675 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:28:34.316513  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.327623  155675 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:28:34.327797  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:34.327859  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.360824  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.360849  155675 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:28:34.360901  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.388164  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.388188  155675 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:28:34.388197  155675 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:28:34.388287  155675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:28:34.388349  155675 ssh_runner.go:195] Run: crio config
	I1002 21:28:34.434047  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:34.434070  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:34.434089  155675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:28:34.434108  155675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:28:34.434226  155675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
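
The kubeadm config rendered above is a single file holding four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, which is why it is later copied to the node as one 2205-byte kubeadm.yaml.new. A hedged sketch of reading such a multi-document file with gopkg.in/yaml.v3 (the path is the one from the log; this is illustrative, not minikube code):

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// yaml.Decoder yields one document per Decode call; the "---"
    	// separators delimit documents and io.EOF marks end of stream.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				break
    			}
    			panic(err)
    		}
    		fmt.Printf("%s  %s\n", doc.APIVersion, doc.Kind)
    	}
    }
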
	
	I1002 21:28:34.434286  155675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:28:34.442337  155675 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:28:34.442397  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:28:34.450473  155675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:28:34.462634  155675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:28:34.474595  155675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I1002 21:28:34.486784  155675 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:28:34.490250  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.499967  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.576427  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:34.601305  155675 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:28:34.601329  155675 certs.go:195] generating shared ca certs ...
	I1002 21:28:34.601346  155675 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:34.601512  155675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:28:34.601558  155675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:28:34.601570  155675 certs.go:257] generating profile certs ...
	I1002 21:28:34.601674  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:28:34.601761  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:28:34.601817  155675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:28:34.601830  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:28:34.601853  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:28:34.601878  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:28:34.601897  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:28:34.601915  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:28:34.601943  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:28:34.601963  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:28:34.601979  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:28:34.602044  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:28:34.602085  155675 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:28:34.602098  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:28:34.602132  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:28:34.602161  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:28:34.602187  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:28:34.602249  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:34.602291  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.602313  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.602334  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.603145  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:28:34.622533  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:28:34.642167  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:28:34.661662  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:28:34.684982  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:28:34.703295  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:28:34.721710  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:28:34.739228  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:28:34.756359  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:28:34.773708  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:28:34.791360  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:28:34.809607  155675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
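
Each "scp memory -->" line above streams an in-memory asset (the rendered unit file, kubeadm.yaml, kubeconfig) to the node over SSH without staging it on the local disk. One way to get the same effect with golang.org/x/crypto/ssh; a sketch only, under the assumption that piping into sudo tee is acceptable (the helper name and the tee approach are mine, not minikube's implementation; the address, user and key path are taken from the sshutil lines below):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // pushBytes streams data to remotePath on the node, never touching local disk.
    func pushBytes(addr, user, keyPath, remotePath string, data []byte) error {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
    	})
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
    	err := pushBytes("127.0.0.1:32793", "docker",
    		"/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa",
    		"/var/lib/minikube/kubeconfig", []byte("# kubeconfig bytes here\n"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
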
	I1002 21:28:34.822659  155675 ssh_runner.go:195] Run: openssl version
	I1002 21:28:34.828896  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:28:34.837462  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841707  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841776  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.876686  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:28:34.885143  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:28:34.893940  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897851  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897917  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.932255  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:28:34.940703  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:28:34.949899  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953722  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953783  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.989786  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
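
The three openssl x509 -hash -noout runs compute each CA's OpenSSL subject-name hash (b5213941, 51391683 and 3ec20f2e in the ln targets above), and the ln -fs calls publish each certificate as /etc/ssl/certs/<hash>.0, the c_rehash layout that OpenSSL-linked clients use to look up a CA by hash. A small sketch of the same hash-and-symlink step, shelling out to openssl as the log does (the helper name is made up):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkByHash symlinks certPath as /etc/ssl/certs/<subject-hash>.0 so that
    // OpenSSL can find it; the ".0" suffix means "first cert with this hash".
    func linkByHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
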
	I1002 21:28:34.998247  155675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:28:35.002235  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:28:35.036665  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:28:35.070968  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:28:35.106690  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:28:35.154498  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:28:35.193796  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
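
openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds, so this block is a 24-hour expiry sweep over the apiserver, etcd and front-proxy client certs before they are reused. The equivalent check in pure Go stdlib (cert path from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d — the same test as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
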
	I1002 21:28:35.228071  155675 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:35.228163  155675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:28:35.228246  155675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:28:35.256219  155675 cri.go:89] found id: ""
	I1002 21:28:35.256288  155675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:28:35.264604  155675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:28:35.264627  155675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:28:35.264674  155675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:28:35.271961  155675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:35.272339  155675 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.272429  155675 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:28:35.272674  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.273223  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.273680  155675 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:28:35.273697  155675 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:28:35.273706  155675 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:28:35.273711  155675 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:28:35.273716  155675 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:28:35.273768  155675 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:28:35.274106  155675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:28:35.281708  155675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:28:35.281757  155675 kubeadm.go:601] duration metric: took 17.1218ms to restartPrimaryControlPlane
	I1002 21:28:35.281768  155675 kubeadm.go:402] duration metric: took 53.709514ms to StartCluster
	I1002 21:28:35.281788  155675 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.281855  155675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.282359  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.282590  155675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:28:35.282703  155675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:28:35.282793  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:35.282811  155675 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:28:35.282831  155675 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:28:35.282837  155675 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:28:35.282853  155675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:28:35.282867  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.283211  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.283373  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.287818  155675 out.go:179] * Verifying Kubernetes components...
	I1002 21:28:35.289179  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:35.305536  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.305848  155675 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:28:35.305892  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.306218  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.306573  155675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:28:35.307769  155675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.307789  155675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:28:35.307839  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.330701  155675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.330727  155675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:28:35.330911  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.334724  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.351684  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.399040  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:35.412985  155675 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
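
From here node_ready.go polls the apiserver every couple of seconds for up to 6m, looking for the NodeReady condition on "ha-798711"; the repeated "connection refused" warnings further down are those probes failing while the control plane restarts. A client-go sketch of that wait loop (the kubeconfig path is the one from the log; the probe interval is an approximation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21682-80114/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-798711", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node ha-798711 is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(2 * time.Second) // probe interval roughly matching the log
    	}
    	fmt.Println("timed out waiting for node Ready")
    }
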
	I1002 21:28:35.442600  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.460605  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:35.502524  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.502566  155675 retry.go:31] will retry after 185.764836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.517773  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.517809  155675 retry.go:31] will retry after 133.246336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.652188  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.688959  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:35.715291  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.715332  155675 retry.go:31] will retry after 306.166157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.759518  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.759549  155675 retry.go:31] will retry after 301.391679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.022497  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:36.061160  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.079961  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.080007  155675 retry.go:31] will retry after 697.847532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:36.118232  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.118271  155675 retry.go:31] will retry after 395.582354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.514512  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.568051  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.568086  155675 retry.go:31] will retry after 646.007893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.778586  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:36.832650  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.832688  155675 retry.go:31] will retry after 716.06432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.214893  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:37.268191  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.268279  155675 retry.go:31] will retry after 854.849255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:37.413941  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:37.549248  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:37.603971  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.604014  155675 retry.go:31] will retry after 1.344807605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.124286  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:38.177165  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.177199  155675 retry.go:31] will retry after 1.263429075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.949653  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:39.003395  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.003428  155675 retry.go:31] will retry after 2.765859651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:39.414384  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:39.441621  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:39.494342  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.494371  155675 retry.go:31] will retry after 2.952922772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:41.414500  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:41.769964  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:41.823729  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:41.823776  155675 retry.go:31] will retry after 2.930479483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.447772  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:42.501213  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.501266  155675 retry.go:31] will retry after 3.721393623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:43.414622  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:44.755175  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:44.807949  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:44.807981  155675 retry.go:31] will retry after 4.46774792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:45.913827  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:46.223306  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:46.275912  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:46.275942  155675 retry.go:31] will retry after 9.165769414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:48.413715  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:49.276318  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:49.331953  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:49.331996  155675 retry.go:31] will retry after 7.553909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:50.913554  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:53.413799  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:55.442725  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:55.495811  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:55.495844  155675 retry.go:31] will retry after 8.398663559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:55.913916  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:56.886337  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:56.938883  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:56.938912  155675 retry.go:31] will retry after 5.941880418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:58.414176  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:00.913767  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:02.881855  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:02.913856  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:02.936281  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:02.936310  155675 retry.go:31] will retry after 8.801429272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.895505  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:03.949396  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.949425  155675 retry.go:31] will retry after 8.280385033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:04.914589  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:07.413893  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:09.414585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:11.738357  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:11.791944  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:11.791978  155675 retry.go:31] will retry after 20.07436133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:11.913506  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:12.230962  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:12.284322  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:12.284367  155675 retry.go:31] will retry after 31.198537936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:13.913570  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:15.913975  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:18.413914  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:20.913884  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:22.914461  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:25.414237  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:27.914518  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:30.414136  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:31.867242  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:31.921723  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:31.921774  155675 retry.go:31] will retry after 19.984076529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:32.913680  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:34.914116  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:36.914541  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:39.414546  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:41.914263  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:43.484108  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:43.536861  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:43.536898  155675 retry.go:31] will retry after 27.176524941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:44.413860  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:46.414476  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:48.914309  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:51.414076  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:51.906696  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:51.960820  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:51.960952  155675 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:29:53.414245  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:55.913983  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:58.413904  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:00.913802  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:02.914585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:05.414592  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:07.914259  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:10.413676  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:30:10.714113  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:30:10.768467  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:30:10.768623  155675 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:30:10.771151  155675 out.go:179] * Enabled addons: 
	I1002 21:30:10.772416  155675 addons.go:514] duration metric: took 1m35.489723071s for enable addons: enabled=[]
	W1002 21:30:12.413723  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 115 near-identical node_ready.go:55 "connection refused" warnings, logged every ~2-2.5s from 21:30:14 through 21:34:31, elided ...]
	W1002 21:34:33.913730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:34:35.413657  155675 node_ready.go:38] duration metric: took 6m0.000618353s for node "ha-798711" to be "Ready" ...
	I1002 21:34:35.416036  155675 out.go:203] 
	W1002 21:34:35.417586  155675 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:34:35.417604  155675 out.go:285] * 
	W1002 21:34:35.419340  155675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:34:35.420515  155675 out.go:203] 
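	Note on the failure above: the GUEST_START exit has a single root cause visible further down this report. CRI-O never managed to create any control-plane container (the "cannot open sd-bus" errors in the CRI-O and kubelet sections below), so the apiserver never listened on 8443 and every probe was refused. A minimal sketch for confirming that from the host, assuming the ha-798711 node container is still running and that crictl/curl are available inside it (crictl ships in minikube's kicbase image; curl availability is an assumption):
	
	  minikube ssh -p ha-798711 -- sudo crictl ps -a    # matches the empty "container status" table below: no containers at all
	  minikube ssh -p ha-798711 -- curl -sk https://localhost:8443/healthz || echo apiserver unreachable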
	
	
	==> CRI-O <==
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.720667796Z" level=info msg="createCtr: deleting container fed7957e391d22ff1b00c20bf39a2629000d28f6ef8e95fd7a1cc105294d4cf9 from storage" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.722375917Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=4413d481-dcd8-40f8-a194-faad19686e63 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:24 ha-798711 crio[519]: time="2025-10-02T21:34:24.72271947Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=aaa985b0-fc22-414c-b675-b9f570799621 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.692896144Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=e092e879-fb2b-4560-a09a-806f8c083612 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.693827784Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=07d8f06d-5d02-4f43-8d50-922b2fad57f8 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.694857175Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-798711/kube-controller-manager" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.695091123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698533022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.698951966Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.713793668Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715185541Z" level=info msg="createCtr: deleting container ID 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from idIndex" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.71522329Z" level=info msg="createCtr: removing container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.715264586Z" level=info msg="createCtr: deleting container 1f61dc05309357d6d95e8d08d0ee556024b814a437126f0a540e3a1c3084ef48 from storage" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:27 ha-798711 crio[519]: time="2025-10-02T21:34:27.717552516Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=ba6561f4-309b-4d7a-a3c1-bffb7b390cf4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.692941794Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=3f439862-6d29-437c-85d6-7d524d8b447f name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.693856663Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=49ad3c8f-3a68-4392-9a21-40f72e2ac9f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694777439Z" level=info msg="Creating container: kube-system/etcd-ha-798711/etcd" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.694993707Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.6985454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.698958295Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.717136088Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718679137Z" level=info msg="createCtr: deleting container ID 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from idIndex" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718727129Z" level=info msg="createCtr: removing container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.718795592Z" level=info msg="createCtr: deleting container 4cb3048da1bad080ed093015bbfd619d7bdbdf72d7cbe53a62b050a2459faeb3 from storage" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:28 ha-798711 crio[519]: time="2025-10-02T21:34:28.721164251Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-ha-798711_kube-system_121d6aaf59f417ae72d1b593ab9294cb_0" id=8cf72b7b-76a0-43cf-8b8c-fa6104d48781 name=/runtime.v1.RuntimeService/CreateContainer
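	
	The repeated "Container creation error: cannot open sd-bus: No such file or directory" above is the runtime-side root cause: with the systemd cgroup manager, CRI-O asks systemd over sd-bus (D-Bus) to create each container's scope, and an unreachable bus aborts the create before the container exists, which is why every attempt is immediately rolled back ("createCtr: deleting container ... from storage"). A hedged remediation sketch, not taken from this report (cgroup_manager and conmon_cgroup are real crio.conf keys; the drop-in path and the decision to switch drivers are assumptions):
	
	  cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-cgroup-manager.conf
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"   # CRI-O requires "pod" here when the manager is cgroupfs
	  EOF
	  sudo systemctl restart crio
	  # kubelet must use the same driver: cgroupDriver: cgroupfs in its config (or --cgroup-driver=cgroupfs), then restart kubelet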
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:34:39.476174    2342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:39.476651    2342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:39.478253    2342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:39.478684    2342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:39.480189    2342 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
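	
	One caveat about the apply failures earlier in this log: the --validate=false escape hatch that kubectl suggests would not have helped, because the OpenAPI download is merely the first call to hit the dead endpoint; the apply itself posts to the same server. A quick way to see which endpoint the on-node kubeconfig targets (binary and kubeconfig paths as used throughout this log):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig config view --minify -o jsonpath='{.clusters[0].cluster.server}'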
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:34:39 up  3:16,  0 user,  load average: 0.15, 0.10, 0.09
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.692416     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718016     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > podSandboxID="26c7d26dc814a6069dd754062dbc6b80b5e77155b8bcfd144b82a577d7aa24f0"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718124     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:27 ha-798711 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:27 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:27 ha-798711 kubelet[669]: E1002 21:34:27.718154     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.692460     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721530     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > podSandboxID="03e68d2f04bf8c206661aee5adee3f6f82f0584fb4c70614b572bca6f0516412"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721638     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:28 ha-798711 kubelet[669]:         container etcd start failed in pod etcd-ha-798711_kube-system(121d6aaf59f417ae72d1b593ab9294cb): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:28 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:28 ha-798711 kubelet[669]: E1002 21:34:28.721683     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-ha-798711" podUID="121d6aaf59f417ae72d1b593ab9294cb"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.326592     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:34:30 ha-798711 kubelet[669]: I1002 21:34:30.497709     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:34:30 ha-798711 kubelet[669]: E1002 21:34:30.498117     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:34:34 ha-798711 kubelet[669]: E1002 21:34:34.706002     669 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-798711\" not found"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.355106     669 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-798711&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 02 21:34:36 ha-798711 kubelet[669]: E1002 21:34:36.361390     669 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.49.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-798711.186ac9d380df39a3  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-798711,UID:ha-798711,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node ha-798711 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:ha-798711,},FirstTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,LastTimestamp:2025-10-02 21:28:34.678995363 +0000 UTC m=+0.075563829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-798711,}"
	Oct 02 21:34:37 ha-798711 kubelet[669]: E1002 21:34:37.328268     669 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-798711?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:34:37 ha-798711 kubelet[669]: I1002 21:34:37.500033     669 kubelet_node_status.go:75] "Attempting to register node" node="ha-798711"
	Oct 02 21:34:37 ha-798711 kubelet[669]: E1002 21:34:37.500531     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	

-- /stdout --
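Every kubelet CreateContainer failure in the dump above has the same proximate cause: CRI-O reports "container create failed: cannot open sd-bus: No such file or directory", i.e. the runtime's systemd cgroup driver could not reach a systemd bus inside the node container. A minimal, hypothetical probe for that condition from the host, sketched in Go; the container name ha-798711 comes from this run, while the two socket paths are only the conventional systemd/D-Bus locations and are an assumption, not something this report confirms:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Assumption: the systemd cgroup driver needs one of these bus sockets;
		// their absence inside the node container would match the
		// "cannot open sd-bus" errors in the kubelet log above.
		out, err := exec.Command("docker", "exec", "ha-798711",
			"ls", "-l", "/run/systemd/private", "/run/dbus/system_bus_socket").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("socket check failed:", err)
		}
	}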
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (300.024565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.53s)
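For reference, the --format={{.APIServer}} flag used by the harness above is a Go text/template rendered against minikube's status object, which is how a full status report is reduced to the single word "Stopped". A minimal sketch of that mechanism, using a simplified stand-in struct rather than minikube's real type (the field names mirror the templates exercised in this report, and the values mirror this run):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a simplified stand-in, not minikube's internal type; the field
	// names match the templates used in this report ({{.Host}}, {{.APIServer}}).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		t := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = t.Execute(os.Stdout, st) // prints: Stopped
	}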

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.63s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:305: expected profile "ha-798711" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:309: expected profile "ha-798711" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-798711\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-798711\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-798711\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-798711
helpers_test.go:243: (dbg) docker inspect ha-798711:

-- stdout --
	[
	    {
	        "Id": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	        "Created": "2025-10-02T21:11:12.196957126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 155870,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:28:28.629176332Z",
	            "FinishedAt": "2025-10-02T21:28:27.30406005Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/hosts",
	        "LogPath": "/var/lib/docker/containers/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6/41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6-json.log",
	        "Name": "/ha-798711",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-798711:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-798711",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41ac5ea9a79947df03b806af087136e45594199389bd17227bf3b3acbe6c07a6",
	                "LowerDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158dd6a1afdb98a1698218c60ffd82c787ab2afe057170bb77ab5f7bae30909a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-798711",
	                "Source": "/var/lib/docker/volumes/ha-798711/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-798711",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-798711",
	                "name.minikube.sigs.k8s.io": "ha-798711",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e6709695e88674e10e353a7a1e6a5f597599db0f8dff17de25e6a675a5a052e8",
	            "SandboxKey": "/var/run/docker/netns/e6709695e886",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32793"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32794"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32797"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32795"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32796"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-798711": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "32:b8:bb:5f:71:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f71aea15b04799fb3cea55e549809c41456b4f7ec3d9c83531db42f007a30769",
	                    "EndpointID": "d6008f1fd1a1f997c0b42aeef656e8d86f4f11d2951f29e56ff47db4f71a71ea",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-798711",
	                        "41ac5ea9a799"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
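The inspect output above is also where the node's SSH endpoint comes from: NetworkSettings.Ports maps the container's 22/tcp to 127.0.0.1:32793, the address the provisioner dials later in this log. A sketch of extracting that port with the same --format template the log itself runs, assuming the ha-798711 container from this run still exists:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template the provisioner uses below to locate the SSH host port.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-798711").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("22/tcp ->", strings.TrimSpace(string(out))) // e.g. 32793
	}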
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-798711 -n ha-798711: exit status 2 (307.532703ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-798711 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                             ARGS                                             │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:19 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:20 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.io                                         │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- exec  -- nslookup kubernetes.default.svc.cluster.local                  │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ kubectl │ ha-798711 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node add --alsologtostderr -v 5                                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node stop m02 --alsologtostderr -v 5                                               │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node start m02 --alsologtostderr -v 5                                              │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:21 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │ 02 Oct 25 21:22 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5                                           │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:22 UTC │                     │
	│ node    │ ha-798711 node list --alsologtostderr -v 5                                                   │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                                             │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                        │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │ 02 Oct 25 21:28 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node add --control-plane --alsologtostderr -v 5                                    │ ha-798711 │ jenkins │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:28:28
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:28:28.403003  155675 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:28:28.403116  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403125  155675 out.go:374] Setting ErrFile to fd 2...
	I1002 21:28:28.403129  155675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:28:28.403315  155675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:28:28.403776  155675 out.go:368] Setting JSON to false
	I1002 21:28:28.404642  155675 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11449,"bootTime":1759429059,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:28:28.404726  155675 start.go:140] virtualization: kvm guest
	I1002 21:28:28.406949  155675 out.go:179] * [ha-798711] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:28:28.408440  155675 notify.go:220] Checking for updates...
	I1002 21:28:28.408467  155675 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:28:28.409938  155675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:28:28.411145  155675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:28.412417  155675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:28:28.413758  155675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:28:28.415028  155675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:28:28.416927  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:28.417596  155675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:28:28.441148  155675 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:28:28.441315  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.496626  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.486980606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.496755  155675 docker.go:318] overlay module found
	I1002 21:28:28.498705  155675 out.go:179] * Using the docker driver based on existing profile
	I1002 21:28:28.499971  155675 start.go:304] selected driver: docker
	I1002 21:28:28.499988  155675 start.go:924] validating driver "docker" against &{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.500076  155675 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:28:28.500152  155675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:28:28.554609  155675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:28:28.545101226 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:28:28.555297  155675 start_flags.go:1002] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1002 21:28:28.555338  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:28.555400  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:28.555463  155675 start.go:348] cluster config:
	{Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:28.557542  155675 out.go:179] * Starting "ha-798711" primary control-plane node in "ha-798711" cluster
	I1002 21:28:28.558794  155675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:28:28.559993  155675 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:28:28.561213  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:28.561259  155675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:28:28.561268  155675 cache.go:58] Caching tarball of preloaded images
	I1002 21:28:28.561312  155675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:28:28.561377  155675 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:28:28.561394  155675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:28:28.561531  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.581862  155675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:28:28.581882  155675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:28:28.581898  155675 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:28:28.581920  155675 start.go:360] acquireMachinesLock for ha-798711: {Name:mkde43077785b64bbfb5ce93a22f7d6ca9fe7c07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:28:28.581974  155675 start.go:364] duration metric: took 36.029µs to acquireMachinesLock for "ha-798711"
	I1002 21:28:28.581991  155675 start.go:96] Skipping create...Using existing machine configuration
	I1002 21:28:28.581998  155675 fix.go:54] fixHost starting: 
	I1002 21:28:28.582193  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.600330  155675 fix.go:112] recreateIfNeeded on ha-798711: state=Stopped err=<nil>
	W1002 21:28:28.600370  155675 fix.go:138] unexpected machine state, will restart: <nil>
	I1002 21:28:28.602558  155675 out.go:252] * Restarting existing docker container for "ha-798711" ...
	I1002 21:28:28.602629  155675 cli_runner.go:164] Run: docker start ha-798711
	I1002 21:28:28.838867  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:28.857507  155675 kic.go:430] container "ha-798711" state is running.
	I1002 21:28:28.857953  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:28.875695  155675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/config.json ...
	I1002 21:28:28.875935  155675 machine.go:93] provisionDockerMachine start ...
	I1002 21:28:28.876007  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:28.894590  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:28.894848  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:28.894862  155675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:28:28.895489  155675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32860->127.0.0.1:32793: read: connection reset by peer
	I1002 21:28:32.042146  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.042175  155675 ubuntu.go:182] provisioning hostname "ha-798711"
	I1002 21:28:32.042247  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.060169  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.060387  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.060400  155675 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-798711 && echo "ha-798711" | sudo tee /etc/hostname
	I1002 21:28:32.214017  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-798711
	
	I1002 21:28:32.214104  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.232113  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.232342  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.232359  155675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-798711' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-798711/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-798711' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:28:32.376535  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:28:32.376566  155675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:28:32.376584  155675 ubuntu.go:190] setting up certificates
	I1002 21:28:32.376592  155675 provision.go:84] configureAuth start
	I1002 21:28:32.376642  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:32.396020  155675 provision.go:143] copyHostCerts
	I1002 21:28:32.396062  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396100  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:28:32.396116  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:28:32.396183  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:28:32.396277  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396305  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:28:32.396320  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:28:32.396353  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:28:32.396398  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396415  155675 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:28:32.396419  155675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:28:32.396441  155675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:28:32.396489  155675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.ha-798711 san=[127.0.0.1 192.168.49.2 ha-798711 localhost minikube]
	I1002 21:28:32.512217  155675 provision.go:177] copyRemoteCerts
	I1002 21:28:32.512275  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:28:32.512317  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.530566  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:32.631941  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1002 21:28:32.631999  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:28:32.649350  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1002 21:28:32.649401  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1002 21:28:32.666579  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1002 21:28:32.666640  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 21:28:32.684729  155675 provision.go:87] duration metric: took 308.118918ms to configureAuth
	I1002 21:28:32.684867  155675 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:28:32.685043  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:32.685148  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.703210  155675 main.go:141] libmachine: Using SSH client type: native
	I1002 21:28:32.703437  155675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I1002 21:28:32.703461  155675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:28:32.962015  155675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:28:32.962052  155675 machine.go:96] duration metric: took 4.086102415s to provisionDockerMachine
	I1002 21:28:32.962066  155675 start.go:293] postStartSetup for "ha-798711" (driver="docker")
	I1002 21:28:32.962081  155675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:28:32.962161  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:28:32.962205  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:32.980349  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.082626  155675 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:28:33.086352  155675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:28:33.086384  155675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:28:33.086398  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:28:33.086455  155675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:28:33.086573  155675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:28:33.086598  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /etc/ssl/certs/841002.pem
	I1002 21:28:33.086723  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:28:33.094470  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:33.112480  155675 start.go:296] duration metric: took 150.396395ms for postStartSetup
	I1002 21:28:33.112566  155675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:28:33.112609  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.130086  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.230100  155675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:28:33.235048  155675 fix.go:56] duration metric: took 4.65304118s for fixHost
	I1002 21:28:33.235074  155675 start.go:83] releasing machines lock for "ha-798711", held for 4.653089722s
	I1002 21:28:33.235148  155675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-798711
	I1002 21:28:33.253218  155675 ssh_runner.go:195] Run: cat /version.json
	I1002 21:28:33.253241  155675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:28:33.253280  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.253330  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:33.273049  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.273536  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:33.445879  155675 ssh_runner.go:195] Run: systemctl --version
	I1002 21:28:33.452886  155675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:28:33.488518  155675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:28:33.493393  155675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:28:33.493458  155675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:28:33.501643  155675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1002 21:28:33.501669  155675 start.go:495] detecting cgroup driver to use...
	I1002 21:28:33.501700  155675 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:28:33.501756  155675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:28:33.515853  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:28:33.528213  155675 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:28:33.528272  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:28:33.542828  155675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:28:33.556143  155675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:28:33.634827  155675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:28:33.716388  155675 docker.go:234] disabling docker service ...
	I1002 21:28:33.716495  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:28:33.731194  155675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:28:33.744342  155675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:28:33.823830  155675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:28:33.905576  155675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:28:33.918701  155675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:28:33.933267  155675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:28:33.933327  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.942732  155675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:28:33.942809  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.951932  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.961276  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.970164  155675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:28:33.978507  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.987369  155675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:33.995524  155675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:28:34.004102  155675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:28:34.011220  155675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:28:34.018342  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.095886  155675 ssh_runner.go:195] Run: sudo systemctl restart crio
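
The sequence above edits /etc/crio/crio.conf.d/02-crio.conf with a series of sed one-liners (pause image, systemd cgroup manager, conmon_cgroup, the default_sysctls block), then reloads systemd and restarts CRI-O so the new TOML takes effect. The first of those edits redone in Go as a rough line-oriented equivalent (a sketch, to be run as root on a real node):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        b, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        b = re.ReplaceAll(b, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(conf, b, 0o644); err != nil {
            panic(err)
        }
        // minikube then runs `systemctl daemon-reload` and
        // `systemctl restart crio`, as the log shows.
    }
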
	I1002 21:28:34.203604  155675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:28:34.203665  155675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:28:34.207612  155675 start.go:563] Will wait 60s for crictl version
	I1002 21:28:34.207675  155675 ssh_runner.go:195] Run: which crictl
	I1002 21:28:34.211395  155675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:28:34.235415  155675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:28:34.235492  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.263418  155675 ssh_runner.go:195] Run: crio --version
	I1002 21:28:34.293048  155675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:28:34.294508  155675 cli_runner.go:164] Run: docker network inspect ha-798711 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:28:34.312107  155675 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1002 21:28:34.316513  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
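
The /etc/hosts update above is idempotent: drop any existing host.minikube.internal line, append a fresh one, and copy the result back into place via a temp file. The same shape in Go against a local sample file (the real target is /etc/hosts and needs root; the sample path is hypothetical):

    package main

    import (
        "os"
        "strings"
    )

    // pinHost mirrors the log's { grep -v ...; echo ...; } > tmp; sudo cp pipeline:
    // remove any stale entry for the name, then append a fresh one.
    func pinHost(path, ip, name string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(strings.TrimRight(string(b), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("hosts.sample", "192.168.49.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
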
	I1002 21:28:34.327623  155675 kubeadm.go:883] updating cluster {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:28:34.327797  155675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:28:34.327859  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.360824  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.360849  155675 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:28:34.360901  155675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:28:34.388164  155675 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:28:34.388188  155675 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:28:34.388197  155675 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1002 21:28:34.388287  155675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-798711 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1002 21:28:34.388349  155675 ssh_runner.go:195] Run: crio config
	I1002 21:28:34.434047  155675 cni.go:84] Creating CNI manager for ""
	I1002 21:28:34.434070  155675 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1002 21:28:34.434089  155675 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:28:34.434108  155675 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-798711 NodeName:ha-798711 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:28:34.434226  155675 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-798711"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 21:28:34.434286  155675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:28:34.442337  155675 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:28:34.442397  155675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:28:34.450473  155675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1002 21:28:34.462634  155675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:28:34.474595  155675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
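
The kubeadm.yaml written to /var/tmp/minikube/kubeadm.yaml.new above is four YAML documents in one stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). To sanity-check a file like it, a small sketch with gopkg.in/yaml.v3 (an assumed dependency, not something this report uses) that lists each document's apiVersion and kind:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        b, err := os.ReadFile("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(b))
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }
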
	I1002 21:28:34.486784  155675 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:28:34.490250  155675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:28:34.499967  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:34.576427  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:34.601305  155675 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711 for IP: 192.168.49.2
	I1002 21:28:34.601329  155675 certs.go:195] generating shared ca certs ...
	I1002 21:28:34.601346  155675 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:34.601512  155675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:28:34.601558  155675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:28:34.601570  155675 certs.go:257] generating profile certs ...
	I1002 21:28:34.601674  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key
	I1002 21:28:34.601761  155675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key.591e0d3b
	I1002 21:28:34.601817  155675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key
	I1002 21:28:34.601830  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1002 21:28:34.601853  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1002 21:28:34.601878  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1002 21:28:34.601897  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1002 21:28:34.601915  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1002 21:28:34.601943  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1002 21:28:34.601963  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1002 21:28:34.601979  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1002 21:28:34.602044  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:28:34.602085  155675 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:28:34.602098  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:28:34.602132  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:28:34.602161  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:28:34.602187  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:28:34.602249  155675 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:28:34.602291  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.602313  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.602334  155675 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem -> /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.603145  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:28:34.622533  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:28:34.642167  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:28:34.661662  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:28:34.684982  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1002 21:28:34.703295  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:28:34.721710  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:28:34.739228  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1671 bytes)
	I1002 21:28:34.756359  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:28:34.773708  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:28:34.791360  155675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:28:34.809607  155675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:28:34.822659  155675 ssh_runner.go:195] Run: openssl version
	I1002 21:28:34.828896  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:28:34.837462  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841707  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.841776  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:28:34.876686  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:28:34.885143  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:28:34.893940  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897851  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.897917  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:28:34.932255  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
	I1002 21:28:34.940703  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:28:34.949899  155675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953722  155675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.953783  155675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:28:34.989786  155675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
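
The three test -L / ln -fs runs above install each CA into the system trust store using OpenSSL's hashed-name convention: /etc/ssl/certs/<subject-hash>.0 must point at the PEM, where the hash is whatever `openssl x509 -hash -noout` prints (b5213941, 51391683 and 3ec20f2e in this run). A small Go sketch that builds the link name the same way by shelling out to openssl (assumes openssl on PATH and write access to /etc/ssl/certs; it mirrors, rather than reuses, what certs.go does):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }
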
	I1002 21:28:34.998247  155675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:28:35.002235  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 21:28:35.036665  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 21:28:35.070968  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 21:28:35.106690  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 21:28:35.154498  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 21:28:35.193796  155675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 21:28:35.228071  155675 kubeadm.go:400] StartCluster: {Name:ha-798711 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-798711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:28:35.228163  155675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:28:35.228246  155675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:28:35.256219  155675 cri.go:89] found id: ""
	I1002 21:28:35.256288  155675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:28:35.264604  155675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1002 21:28:35.264627  155675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1002 21:28:35.264674  155675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 21:28:35.271961  155675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 21:28:35.272339  155675 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-798711" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.272429  155675 kubeconfig.go:62] /home/jenkins/minikube-integration/21682-80114/kubeconfig needs updating (will repair): [kubeconfig missing "ha-798711" cluster setting kubeconfig missing "ha-798711" context setting]
	I1002 21:28:35.272674  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.273223  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.273680  155675 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1002 21:28:35.273697  155675 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1002 21:28:35.273706  155675 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1002 21:28:35.273711  155675 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1002 21:28:35.273716  155675 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1002 21:28:35.273768  155675 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1002 21:28:35.274106  155675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 21:28:35.281708  155675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1002 21:28:35.281757  155675 kubeadm.go:601] duration metric: took 17.1218ms to restartPrimaryControlPlane
	I1002 21:28:35.281768  155675 kubeadm.go:402] duration metric: took 53.709514ms to StartCluster
	I1002 21:28:35.281788  155675 settings.go:142] acquiring lock: {Name:mk553e97313ee9dbe2157c59aec3e740fe8caee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.281855  155675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:28:35.282359  155675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/kubeconfig: {Name:mk217b5f5bd58ca1fcf14c5f9c7dab0126c3f720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:28:35.282590  155675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:28:35.282703  155675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1002 21:28:35.282793  155675 config.go:182] Loaded profile config "ha-798711": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:28:35.282811  155675 addons.go:69] Setting storage-provisioner=true in profile "ha-798711"
	I1002 21:28:35.282831  155675 addons.go:238] Setting addon storage-provisioner=true in "ha-798711"
	I1002 21:28:35.282837  155675 addons.go:69] Setting default-storageclass=true in profile "ha-798711"
	I1002 21:28:35.282853  155675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-798711"
	I1002 21:28:35.282867  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.283211  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.283373  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.287818  155675 out.go:179] * Verifying Kubernetes components...
	I1002 21:28:35.289179  155675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:28:35.305536  155675 kapi.go:59] client config for ha-798711: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.crt", KeyFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/profiles/ha-798711/client.key", CAFile:"/home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819b80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 21:28:35.305848  155675 addons.go:238] Setting addon default-storageclass=true in "ha-798711"
	I1002 21:28:35.305892  155675 host.go:66] Checking if "ha-798711" exists ...
	I1002 21:28:35.306218  155675 cli_runner.go:164] Run: docker container inspect ha-798711 --format={{.State.Status}}
	I1002 21:28:35.306573  155675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 21:28:35.307769  155675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.307789  155675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 21:28:35.307839  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.330701  155675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.330727  155675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 21:28:35.330911  155675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-798711
	I1002 21:28:35.334724  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.351684  155675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/ha-798711/id_rsa Username:docker}
	I1002 21:28:35.399040  155675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1002 21:28:35.412985  155675 node_ready.go:35] waiting up to 6m0s for node "ha-798711" to be "Ready" ...
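
node_ready.go starts a 6-minute poll of the node's Ready condition here; each "connection refused" warning further down is one failed probe while the apiserver is still coming back up. A condensed client-go version of that wait (kubeconfig path hypothetical; minikube's own loop additionally classifies error kinds before retrying):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-798711", metav1.GetOptions{})
                if err != nil {
                    return false, nil // swallow transient errors and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
    }
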
	I1002 21:28:35.442600  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 21:28:35.460605  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:35.502524  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.502566  155675 retry.go:31] will retry after 185.764836ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.517773  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.517809  155675 retry.go:31] will retry after 133.246336ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
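
Each "will retry after ..." line comes from minikube's retry helper, which re-runs the kubectl apply with a growing, randomized delay until the apiserver answers; the waits in this run step from roughly 185ms up to about 2.9s. The pattern in miniature (a generic backoff sketch, not retry.go itself):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, which is the shape of the waits in the log.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base << i                             // exponential growth
            d += time.Duration(rand.Int63n(int64(d))) // plus jitter
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(5, 150*time.Millisecond, func() error {
            return errors.New("connect: connection refused") // stand-in for the kubectl apply
        })
    }
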
	I1002 21:28:35.652188  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:35.688959  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:35.715291  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.715332  155675 retry.go:31] will retry after 306.166157ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:35.759518  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:35.759549  155675 retry.go:31] will retry after 301.391679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.022497  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1002 21:28:36.061160  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.079961  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.080007  155675 retry.go:31] will retry after 697.847532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:36.118232  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.118271  155675 retry.go:31] will retry after 395.582354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.514512  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:36.568051  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.568086  155675 retry.go:31] will retry after 646.007893ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.778586  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:36.832650  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:36.832688  155675 retry.go:31] will retry after 716.06432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.214893  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:37.268191  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.268279  155675 retry.go:31] will retry after 854.849255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:37.413941  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:37.549248  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:37.603971  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:37.604014  155675 retry.go:31] will retry after 1.344807605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.124286  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:38.177165  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.177199  155675 retry.go:31] will retry after 1.263429075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:38.949653  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:39.003395  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.003428  155675 retry.go:31] will retry after 2.765859651s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:39.414384  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:39.441621  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:39.494342  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:39.494371  155675 retry.go:31] will retry after 2.952922772s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:41.414500  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:41.769964  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:41.823729  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:41.823776  155675 retry.go:31] will retry after 2.930479483s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.447772  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:42.501213  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:42.501266  155675 retry.go:31] will retry after 3.721393623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:43.414622  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:44.755175  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:44.807949  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:44.807981  155675 retry.go:31] will retry after 4.46774792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:45.913827  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:46.223306  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:46.275912  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:46.275942  155675 retry.go:31] will retry after 9.165769414s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:48.413715  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:49.276318  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:49.331953  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:49.331996  155675 retry.go:31] will retry after 7.553909482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:50.913554  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:28:53.413799  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:55.442725  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:28:55.495811  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:55.495844  155675 retry.go:31] will retry after 8.398663559s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:55.913916  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:28:56.886337  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:28:56.938883  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:28:56.938912  155675 retry.go:31] will retry after 5.941880418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:28:58.414176  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:00.913767  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:02.881855  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:02.913856  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:02.936281  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:02.936310  155675 retry.go:31] will retry after 8.801429272s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.895505  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:03.949396  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:03.949425  155675 retry.go:31] will retry after 8.280385033s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:04.914589  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:07.413893  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:09.414585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:11.738357  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:11.791944  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:11.791978  155675 retry.go:31] will retry after 20.07436133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:11.913506  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:12.230962  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:12.284322  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:12.284367  155675 retry.go:31] will retry after 31.198537936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:13.913570  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:15.913975  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:18.413914  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:20.913884  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:22.914461  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:25.414237  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:27.914518  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:30.414136  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:31.867242  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:31.921723  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:31.921774  155675 retry.go:31] will retry after 19.984076529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:32.913680  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:34.914116  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:36.914541  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:39.414546  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:41.914263  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:43.484108  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:29:43.536861  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1002 21:29:43.536898  155675 retry.go:31] will retry after 27.176524941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:44.413860  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:46.414476  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:48.914309  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:51.414076  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:29:51.906696  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1002 21:29:51.960820  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:29:51.960952  155675 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1002 21:29:53.414245  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:55.913983  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:29:58.413904  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:00.913802  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:02.914585  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:05.414592  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:07.914259  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:10.413676  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:30:10.714113  155675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1002 21:30:10.768467  155675 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1002 21:30:10.768623  155675 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1002 21:30:10.771151  155675 out.go:179] * Enabled addons: 
	I1002 21:30:10.772416  155675 addons.go:514] duration metric: took 1m35.489723071s for enable addons: enabled=[]
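
The `retry.go:31` lines above come from minikube's generic backoff helper re-running `kubectl apply` until the API server answers (which it never does in this run). A minimal sketch of that retry-with-backoff pattern, for illustration only -- the `apply` callback, the 2x growth, the 30s cap, and the jitter are assumptions, not minikube's actual code:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs op until it succeeds or maxElapsed is exceeded,
// sleeping a jittered, growing delay between attempts -- the same shape as
// the "will retry after Xs" lines in the log above.
func retryWithBackoff(op func() error, maxElapsed time.Duration) error {
	start := time.Now()
	delay := time.Second
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Second), err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay < 30*time.Second { // assumed cap; minikube's real policy may differ
			delay *= 2
		}
	}
}

func main() {
	// Hypothetical stand-in for "kubectl apply -f storageclass.yaml";
	// here it always fails, as it did throughout this run.
	apply := func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	}
	if err := retryWithBackoff(apply, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
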
	W1002 21:30:12.413723  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	W1002 21:30:14.414457  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	[... 114 near-identical node_ready retry lines, one every ~2-2.5s from 21:30:16 through 21:34:31, elided ...]
	W1002 21:34:33.913730  155675 node_ready.go:55] error getting node "ha-798711" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-798711": dial tcp 192.168.49.2:8443: connect: connection refused
	I1002 21:34:35.413657  155675 node_ready.go:38] duration metric: took 6m0.000618353s for node "ha-798711" to be "Ready" ...
	I1002 21:34:35.416036  155675 out.go:203] 
	W1002 21:34:35.417586  155675 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1002 21:34:35.417604  155675 out.go:285] * 
	W1002 21:34:35.419340  155675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:34:35.420515  155675 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.700676584Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.701246775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.702148972Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.702726767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.727603447Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=e0325745-eb25-4185-8dfa-80fc32f81049 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.728957337Z" level=info msg="createCtr: deleting container ID 57463ea38c0be7bf230c5d98a5a9d80624549c4ca6a25b4e8ceb997e894ba321 from idIndex" id=e0325745-eb25-4185-8dfa-80fc32f81049 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.728993398Z" level=info msg="createCtr: removing container 57463ea38c0be7bf230c5d98a5a9d80624549c4ca6a25b4e8ceb997e894ba321" id=e0325745-eb25-4185-8dfa-80fc32f81049 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.729032403Z" level=info msg="createCtr: deleting container 57463ea38c0be7bf230c5d98a5a9d80624549c4ca6a25b4e8ceb997e894ba321 from storage" id=e0325745-eb25-4185-8dfa-80fc32f81049 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.729234713Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0498611b-fb79-4393-b359-6fa8049ad2ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.730618452Z" level=info msg="createCtr: deleting container ID 33000d787ef89d401461be75aa37ee795b952dbf3b285fe82de1a40ddd6bc411 from idIndex" id=0498611b-fb79-4393-b359-6fa8049ad2ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.730652946Z" level=info msg="createCtr: removing container 33000d787ef89d401461be75aa37ee795b952dbf3b285fe82de1a40ddd6bc411" id=0498611b-fb79-4393-b359-6fa8049ad2ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.730681295Z" level=info msg="createCtr: deleting container 33000d787ef89d401461be75aa37ee795b952dbf3b285fe82de1a40ddd6bc411 from storage" id=0498611b-fb79-4393-b359-6fa8049ad2ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.73249502Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-ha-798711_kube-system_99959991b914cf8813c444c7d7c77a99_0" id=e0325745-eb25-4185-8dfa-80fc32f81049 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:39 ha-798711 crio[519]: time="2025-10-02T21:34:39.732717592Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-ha-798711_kube-system_97bad4ae8cc2ed35ff99f173b6df4a90_0" id=0498611b-fb79-4393-b359-6fa8049ad2ab name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.692528628Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=87a8601b-c90d-4232-b139-255819f6733c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.693644724Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=24cf32ef-e09e-4d84-8516-309cbf79b99c name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.694824978Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-798711/kube-apiserver" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.695117486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.699068052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.699582149Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.716952099Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.718432205Z" level=info msg="createCtr: deleting container ID 8f9ae72e1ecc17077256ccda74230eccbc9af1b4c6af60c3274e7137dfec1bbc from idIndex" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.718470046Z" level=info msg="createCtr: removing container 8f9ae72e1ecc17077256ccda74230eccbc9af1b4c6af60c3274e7137dfec1bbc" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.718503108Z" level=info msg="createCtr: deleting container 8f9ae72e1ecc17077256ccda74230eccbc9af1b4c6af60c3274e7137dfec1bbc from storage" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:34:40 ha-798711 crio[519]: time="2025-10-02T21:34:40.720704527Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-ha-798711_kube-system_4a40991d7a1715abba4b4bde50171ddc_0" id=f27755ab-107c-49ca-844d-85b72d363b8c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:34:41.099051    2531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:41.099620    2531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:41.101208    2531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:41.101621    2531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:34:41.102875    2531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001879] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:34:41 up  3:17,  0 user,  load average: 0.15, 0.10, 0.09
	Linux ha-798711 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:34:37 ha-798711 kubelet[669]: E1002 21:34:37.500531     669 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-798711"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.692342     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.692486     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.732859     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:39 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:39 ha-798711 kubelet[669]:  > podSandboxID="8e26595368e6d3f50fc055074c5b2013d59859ad2fa97b5cfb2a41a371b1f457"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.732984     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:39 ha-798711 kubelet[669]:         container kube-scheduler start failed in pod kube-scheduler-ha-798711_kube-system(99959991b914cf8813c444c7d7c77a99): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:39 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.733003     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:39 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:39 ha-798711 kubelet[669]:  > podSandboxID="26c7d26dc814a6069dd754062dbc6b80b5e77155b8bcfd144b82a577d7aa24f0"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.733028     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-ha-798711" podUID="99959991b914cf8813c444c7d7c77a99"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.733074     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:39 ha-798711 kubelet[669]:         container kube-controller-manager start failed in pod kube-controller-manager-ha-798711_kube-system(97bad4ae8cc2ed35ff99f173b6df4a90): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:39 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:39 ha-798711 kubelet[669]: E1002 21:34:39.734120     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-ha-798711" podUID="97bad4ae8cc2ed35ff99f173b6df4a90"
	Oct 02 21:34:40 ha-798711 kubelet[669]: E1002 21:34:40.692078     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-798711\" not found" node="ha-798711"
	Oct 02 21:34:40 ha-798711 kubelet[669]: E1002 21:34:40.721041     669 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:34:40 ha-798711 kubelet[669]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:40 ha-798711 kubelet[669]:  > podSandboxID="7cdd8b4af29704086c206b32c67cd4ae2c4228b7e1ecd3f646369e923f879ed2"
	Oct 02 21:34:40 ha-798711 kubelet[669]: E1002 21:34:40.721159     669 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:34:40 ha-798711 kubelet[669]:         container kube-apiserver start failed in pod kube-apiserver-ha-798711_kube-system(4a40991d7a1715abba4b4bde50171ddc): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:34:40 ha-798711 kubelet[669]:  > logger="UnhandledError"
	Oct 02 21:34:40 ha-798711 kubelet[669]: E1002 21:34:40.721198     669 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-ha-798711" podUID="4a40991d7a1715abba4b4bde50171ddc"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-798711 -n ha-798711: exit status 2 (304.115448ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-798711" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.63s)
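The same failure signature repeats across the TestMultiControlPlane subtests: CRI-O rejects every control-plane container with "cannot open sd-bus: No such file or directory" (typically a sign that the runtime's systemd cgroup manager cannot reach a D-Bus socket inside the minikube container), so kube-apiserver never comes up and minikube's Ready poll burns through its 6m0s deadline a couple of seconds at a time. For reference, the retry visible in the node_ready.go lines above is the standard client-go polling idiom; the following is a minimal sketch of such a wait, assuming a configured kubeconfig — the names, interval, and timeout are illustrative, not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the node reports the Ready
    // condition or the timeout expires, mirroring the retry pattern in the
    // node_ready.go log lines above (illustrative sketch, not minikube code).
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				// Transient errors (e.g. connection refused while the
    				// apiserver is still starting) are logged and retried.
    				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(context.Background(), cs, "ha-798711", 6*time.Minute); err != nil {
    		fmt.Println("node never became Ready:", err)
    	}
    }

Because the condition function swallows transient errors and returns (false, nil), the poll keeps retrying until the context deadline — which is exactly why the log above ends with "WaitNodeCondition: context deadline exceeded" rather than the underlying connection-refused error.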

                                                
                                    
x
+
TestJSONOutput/start/Command (500.6s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-018093 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1002 21:37:02.783686   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:42:02.783532   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-018093 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: exit status 80 (8m20.599202234s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"467045bb-2114-4e4f-b166-6518c797d972","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-018093] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12078b82-c2b7-4e37-8587-e9e6878b0dc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"e7faab7c-3b43-4124-953c-79327269b4ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"05fb4f9d-c483-4d20-a851-61972de00206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig"}}
	{"specversion":"1.0","id":"f3b519ec-c6df-4fd4-bd4f-551f588c42d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube"}}
	{"specversion":"1.0","id":"3055185a-83a6-41b2-b71f-4be668eb904c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7aceb438-1a48-47ba-87c9-aadfe635265d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b171a085-5933-43d2-9ec6-e002f99078e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b40fa4c2-af27-4c7f-81ef-5f7dfe170aed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"460609eb-9645-4caf-bdd8-65e9dc2877c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-018093\" primary control-plane node in \"json-output-018093\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"00f58962-fcee-4b89-8c60-ad7e0e13b74a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceda7504-00b5-45aa-8c13-907e56539a9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ec500ac-17c2-4e65-9f02-ccd04712661d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"11","message":"Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...","name":"Preparing Kubernetes","totalsteps":"19"}}
	{"specversion":"1.0","id":"95926043-d3a2-406f-a1c0-9d77b40ac81e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9df8d78-b3bd-4177-928e-d2ece1770c77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"3dc0b07b-d0cd-4669-bc54-26468a1f0844","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Pri
nting the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\
n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[certs] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writi
ng \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the ku
belet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001790237s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000220055s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000333951s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000382592s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using
your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check
failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"0106cef8-1528-4d16-98c5-4ff1e9d21742","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"12","message":"Generating certificates and keys ...","name":"Generating certificates","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1035366-36b2-4fe9-8931-6347fed50efc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"13","message":"Booting up control plane ...","name":"Booting control plane","totalsteps":"19"}}
	{"specversion":"1.0","id":"092b4fc3-db63-46f5-b780-16ae1d0a71ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the outpu
t from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using
existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[
etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/health
z. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v p
ause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:102
57/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"}}
	{"specversion":"1.0","id":"dca2e44e-6297-4131-9cb9-7f7fa3ea51a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system v
erification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/va
r/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Using existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy ku
belet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/c
rio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager
check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher","name":"GUEST_START","url":""}}
	{"specversion":"1.0","id":"b825914a-ca07-4735-a365-cb26e7d96aff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 start -p json-output-018093 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio": exit status 80
--- FAIL: TestJSONOutput/start/Command (500.60s)
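The start subtests consume this --output=json stream directly: each stdout line is one CloudEvents-style JSON object, and DistinctCurrentSteps (whose failure is dumped below) asserts that a given "currentstep" index is emitted only once per run. Because the failed kubeadm init is retried, steps 12 and 13 ("Generating certificates", "Booting up control plane") appear twice in the stream above, which is the duplicate the assertion reports. A minimal sketch of that uniqueness check, reading events from stdin — the struct and field selection here are illustrative, not json_output_test.go's actual types:

    package main

    import (
    	"bufio"
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // cloudEvent models just the fields this check needs from minikube's
    // --output=json lines (a subset of the CloudEvents envelope).
    type cloudEvent struct {
    	Type string `json:"type"`
    	Data struct {
    		CurrentStep string `json:"currentstep"`
    		Message     string `json:"message"`
    	} `json:"data"`
    }

    func main() {
    	seen := map[string]string{} // currentstep -> first message seen
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be very long
    	for sc.Scan() {
    		var ev cloudEvent
    		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
    			fmt.Fprintln(os.Stderr, "not valid JSON:", err)
    			continue
    		}
    		// Only step events carry a currentstep index; info/error events are skipped.
    		if ev.Type != "io.k8s.sigs.minikube.step" {
    			continue
    		}
    		if prev, dup := seen[ev.Data.CurrentStep]; dup {
    			fmt.Printf("step %s has already been assigned: %q vs %q\n",
    				ev.Data.CurrentStep, prev, ev.Data.Message)
    			os.Exit(1)
    		}
    		seen[ev.Data.CurrentStep] = ev.Data.Message
    	}
    }

Read this way, the DistinctCurrentSteps failure below is downstream of the same GUEST_START failure (exit status 80) that re-emitted steps 12 and 13, not an independent bug in the JSON event schema.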

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 12 has already been assigned to another step:
Generating certificates and keys ...
Cannot use for:
Generating certificates and keys ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 467045bb-2114-4e4f-b166-6518c797d972
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-018093] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 12078b82-c2b7-4e37-8587-e9e6878b0dc7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21682"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e7faab7c-3b43-4124-953c-79327269b4ef
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 05fb4f9d-c483-4d20-a851-61972de00206
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f3b519ec-c6df-4fd4-bd4f-551f588c42d2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3055185a-83a6-41b2-b71f-4be668eb904c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7aceb438-1a48-47ba-87c9-aadfe635265d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b171a085-5933-43d2-9ec6-e002f99078e4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b40fa4c2-af27-4c7f-81ef-5f7dfe170aed
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 460609eb-9645-4caf-bdd8-65e9dc2877c7
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-018093\" primary control-plane node in \"json-output-018093\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 00f58962-fcee-4b89-8c60-ad7e0e13b74a
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ceda7504-00b5-45aa-8c13-907e56539a9c
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7ec500ac-17c2-4e65-9f02-ccd04712661d
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 95926043-d3a2-406f-a1c0-9d77b40ac81e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d9df8d78-b3bd-4177-928e-d2ece1770c77
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 3dc0b07b-d0cd-4669-bc54-26468a1f0844
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001790237s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000220055s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000333951s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000382592s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.4
9.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0106cef8-1528-4d16-98c5-4ff1e9d21742
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d1035366-36b2-4fe9-8931-6347fed50efc
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 092b4fc3-db63-46f5-b780-16ae1d0a71ba
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dca2e44e-6297-4131-9cb9-7f7fa3ea51a9
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b825914a-ca07-4735-a365-cb26e7d96aff
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
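The DistinctCurrentSteps failure above is a direct consequence of the kubeadm retry: after "initialization failed, will try again", minikube re-emits the "Generating certificates" (currentstep 12) and "Booting up control plane" (currentstep 13) step events, so those values appear twice in a single run. The subtest requires every currentstep in the io.k8s.sigs.minikube.step events to be unique. Below is a minimal standalone sketch of that check, assuming the one-JSON-object-per-line CloudEvents stream that `minikube start --output=json` writes to stdout; the stepEvent type and the duplicate-detection loop are illustrative, not the actual json_output_test.go code.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// stepEvent models only the fields this check needs from a minikube CloudEvent.
type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin) // expects one JSON event per line on stdin
	for sc.Scan() {
		var ev stepEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip malformed lines and non-step events
		}
		if seen[ev.Data.CurrentStep] {
			fmt.Fprintf(os.Stderr, "duplicate currentstep %q\n", ev.Data.CurrentStep)
			os.Exit(1)
		}
		seen[ev.Data.CurrentStep] = true
	}
}

Piping a start run through it (for instance out/minikube-linux-amd64 start --output=json ... | go run distinct.go, flags elided) would flag currentstep "12" the second time the retry reaches certificate generation.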

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 467045bb-2114-4e4f-b166-6518c797d972
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-018093] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 12078b82-c2b7-4e37-8587-e9e6878b0dc7
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=21682"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e7faab7c-3b43-4124-953c-79327269b4ef
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 05fb4f9d-c483-4d20-a851-61972de00206
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: f3b519ec-c6df-4fd4-bd4f-551f588c42d2
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 3055185a-83a6-41b2-b71f-4be668eb904c
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-linux-amd64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 7aceb438-1a48-47ba-87c9-aadfe635265d
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: b171a085-5933-43d2-9ec6-e002f99078e4
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the docker driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: b40fa4c2-af27-4c7f-81ef-5f7dfe170aed
datacontenttype: application/json
Data,
{
"message": "Using Docker driver with root privileges"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 460609eb-9645-4caf-bdd8-65e9dc2877c7
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-018093\" primary control-plane node in \"json-output-018093\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 00f58962-fcee-4b89-8c60-ad7e0e13b74a
datacontenttype: application/json
Data,
{
"currentstep": "5",
"message": "Pulling base image v0.0.48-1759382731-21643 ...",
"name": "Pulling Base Image",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: ceda7504-00b5-45aa-8c13-907e56539a9c
datacontenttype: application/json
Data,
{
"currentstep": "8",
"message": "Creating docker container (CPUs=2, Memory=3072MB) ...",
"name": "Creating Container",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 7ec500ac-17c2-4e65-9f02-ccd04712661d
datacontenttype: application/json
Data,
{
"currentstep": "11",
"message": "Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...",
"name": "Preparing Kubernetes",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 95926043-d3a2-406f-a1c0-9d77b40ac81e
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d9df8d78-b3bd-4177-928e-d2ece1770c77
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 3dc0b07b-d0cd-4669-bc54-26468a1f0844
datacontenttype: application/json
Data,
{
"message": "initialization failed, will try again: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGR
OUPS_CPU\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating \"apiserver-kubelet-client\" certificate and key\n[c
erts] Generating \"front-proxy-ca\" certificate and key\n[certs] Generating \"front-proxy-client\" certificate and key\n[certs] Generating \"etcd/ca\" certificate and key\n[certs] Generating \"etcd/server\" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/peer\" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [json-output-018093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating \"etcd/healthcheck-client\" certificate and key\n[certs] Generating \"apiserver-etcd-client\" certificate and key\n[certs] Generating \"sa\" key and public key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing
\"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kub
elet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 1.001790237s\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000220055s\n[control-plane-check] kube-scheduler is not healthy after 4m0.000333951s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000382592s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/cr
io.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.4
9.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 0106cef8-1528-4d16-98c5-4ff1e9d21742
datacontenttype: application/json
Data,
{
"currentstep": "12",
"message": "Generating certificates and keys ...",
"name": "Generating certificates",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: d1035366-36b2-4fe9-8931-6347fed50efc
datacontenttype: application/json
Data,
{
"currentstep": "13",
"message": "Booting up control plane ...",
"name": "Booting control plane",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 092b4fc3-db63-46f5-b780-16ae1d0a71ba
datacontenttype: application/json
Data,
{
"message": "Error starting cluster: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[
0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] U
sing existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating stati
c Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[
control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WA
RNING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: dca2e44e-6297-4131-9cb9-7f7fa3ea51a9
datacontenttype: application/json
Data,
{
"advice": "",
"exitcode": "80",
"issues": "",
"message": "failed to start node: wait: sudo /bin/bash -c \"env PATH=\"/var/lib/minikube/binaries/v1.34.1:$PATH\" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables\": Process exited with status 1\nstdout:\n[init] Using Kubernetes version: v1.34.1\n[preflight] Running pre-flight checks\n[preflight] The system verification failed. Printing the output from the verification:\n\u001b[0;37mKERNEL_VERSION\u001b[0m: \u001b[0;32m6.8.0-1041-gcp\u001b[0m\n\u001b[0;37mOS\u001b[0m: \u001b[0;32mLinux\u001b[0m\n\u001b[0;37mCGROUPS_CPU\u001b[0m
: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_CPUSET\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_DEVICES\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_FREEZER\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_MEMORY\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_PIDS\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_HUGETLB\u001b[0m: \u001b[0;32menabled\u001b[0m\n\u001b[0;37mCGROUPS_IO\u001b[0m: \u001b[0;32menabled\u001b[0m\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action beforehand using 'kubeadm config images pull'\n[certs] Using certificateDir folder \"/var/lib/minikube/certs\"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Using existing apiserver-kubelet-client certificate and key on disk\n[certs] Usi
ng existing front-proxy-ca certificate authority\n[certs] Using existing front-proxy-client certificate and key on disk\n[certs] Using existing etcd/ca certificate authority\n[certs] Using existing etcd/server certificate and key on disk\n[certs] Using existing etcd/peer certificate and key on disk\n[certs] Using existing etcd/healthcheck-client certificate and key on disk\n[certs] Using existing apiserver-etcd-client certificate and key on disk\n[certs] Using the existing \"sa\" key\n[kubeconfig] Using kubeconfig folder \"/etc/kubernetes\"\n[kubeconfig] Writing \"admin.conf\" kubeconfig file\n[kubeconfig] Writing \"super-admin.conf\" kubeconfig file\n[kubeconfig] Writing \"kubelet.conf\" kubeconfig file\n[kubeconfig] Writing \"controller-manager.conf\" kubeconfig file\n[kubeconfig] Writing \"scheduler.conf\" kubeconfig file\n[etcd] Creating static Pod manifest for local etcd in \"/etc/kubernetes/manifests\"\n[control-plane] Using manifest folder \"/etc/kubernetes/manifests\"\n[control-plane] Creating static
Pod manifest for \"kube-apiserver\"\n[control-plane] Creating static Pod manifest for \"kube-controller-manager\"\n[control-plane] Creating static Pod manifest for \"kube-scheduler\"\n[kubelet-start] Writing kubelet environment file with flags to file \"/var/lib/kubelet/kubeadm-flags.env\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/instance-config.yaml\"\n[patches] Applied patch of type \"application/strategic-merge-patch+json\" to target \"kubeletconfiguration\"\n[kubelet-start] Writing kubelet configuration to file \"/var/lib/kubelet/config.yaml\"\n[kubelet-start] Starting the kubelet\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory \"/etc/kubernetes/manifests\"\n[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s\n[kubelet-check] The kubelet is healthy after 501.915482ms\n[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s\n[co
ntrol-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez\n[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz\n[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez\n[control-plane-check] kube-scheduler is not healthy after 4m0.000184836s\n[control-plane-check] kube-apiserver is not healthy after 4m0.000274688s\n[control-plane-check] kube-controller-manager is not healthy after 4m0.000491105s\n\nA control plane component may have crashed or exited when started by the container runtime.\nTo troubleshoot, list all containers using your preferred container runtimes CLI.\nHere is one example how you may list all running Kubernetes containers by using crictl:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'\n\tOnce you have found the failing container, you can inspect its logs with:\n\t- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'\n\n\nstderr:\n\t[WARN
ING SystemVerification]: failed to parse kernel config: unable to load kernel module: \"configs\", output: \"modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\\n\", err: exit status 1\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'\nerror: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get \"https://127.0.0.1:10259/livez\": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.49.2:8443/livez: Get \"https://control-plane.minikube.internal:8443/livez?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get \"https://127.0.0.1:10257/healthz\": dial tcp 127.0.0.1:10257: connect: connection refused]\nTo see the stack trace of this error execute with --v=5 or higher",
"name": "GUEST_START",
"url": ""
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: b825914a-ca07-4735-a365-cb26e7d96aff
datacontenttype: application/json
Data,
{
"message": "╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰────────────────────────────────────────
───────────────────────────────────────────────────╯"
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
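IncreasingCurrentSteps fails for the same underlying reason: the retry rewinds the step counter from 13 back to 12, so the observed sequence 0, 1, 3, 5, 8, 11, 12, 13, 12, 13 is not in increasing order. As the passing prefix shows, gaps between step numbers are fine; going backwards is not. A companion sketch of that ordering check (here treated as strictly increasing), under the same assumed JSON-lines input as the sketch above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type stepEvent struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
	} `json:"data"`
}

func main() {
	last := -1 // last currentstep seen; steps may skip values but must not regress
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev stepEvent
		if json.Unmarshal(sc.Bytes(), &ev) != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		n, err := strconv.Atoi(ev.Data.CurrentStep)
		if err != nil {
			continue // currentstep arrives as a stringified integer in these events
		}
		if n <= last {
			fmt.Fprintf(os.Stderr, "currentstep went from %d to %d\n", last, n)
			os.Exit(1)
		}
		last = n
	}
}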

TestMinikubeProfile (501.26s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-449866 --driver=docker  --container-runtime=crio
E1002 21:47:02.783696   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 21:52:02.783328   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p first-449866 --driver=docker  --container-runtime=crio: exit status 80 (8m17.840504212s)

-- stdout --
	* [first-449866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "first-449866" primary control-plane node in "first-449866" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001132175s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000644906s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000598124s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000649583s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.01597ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.01597ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI.
	For example, you can list all running Kubernetes containers with crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	* 

** /stderr **
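The kubeadm output above already names the triage path: list the CRI-O containers and read the failing component's logs. A minimal sketch of that session, assuming it is run inside the node (for example via `minikube ssh -p first-449866`) and using the socket path printed in the log; the container ID is a placeholder taken from the listing:

	# List all Kubernetes containers, including exited ones, excluding pause sandboxes
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Read the logs of whichever component exited (CONTAINERID comes from the listing above)
	crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID

Since all three control-plane endpoints refused connections at once, the kube-apiserver container is a reasonable place to start reading.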
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-linux-amd64 start -p first-449866 --driver=docker  --container-runtime=crio": exit status 80
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 21:53:40.336493367 +0000 UTC m=+5451.396519775
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect second-464148
helpers_test.go:239: (dbg) Non-zero exit: docker inspect second-464148: exit status 1 (31.216304ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error: No such object: second-464148

** /stderr **
helpers_test.go:241: failed to get docker inspect: exit status 1
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p second-464148 -n second-464148
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p second-464148 -n second-464148: exit status 85 (55.695136ms)

-- stdout --
	* Profile "second-464148" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-464148"

-- /stdout --
helpers_test.go:247: status error: exit status 85 (may be ok)
helpers_test.go:249: "second-464148" host is not running, skipping log retrieval (state="* Profile \"second-464148\" not found. Run \"minikube profile list\" to view all profiles.")
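Exit status 85 here ("may be ok") and exit status 6 later in this post-mortem are the two non-zero codes `minikube status` produced in this run; a harness shelling out to it can branch on the codes it has seen rather than treating every non-zero exit as fatal. A hedged sketch using only the statuses observed in this report:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p second-464148
	case $? in
	  0)  echo 'host running and config consistent' ;;
	  85) echo 'profile not found; skip log retrieval' ;;          # the case above
	  6)  echo 'host running but kubeconfig endpoint is stale' ;;  # the first-449866 case below
	  *)  echo 'unexpected status; collect full logs' ;;
	esac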
helpers_test.go:175: Cleaning up "second-464148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-464148
panic.go:636: *** TestMinikubeProfile FAILED at 2025-10-02 21:53:40.577788534 +0000 UTC m=+5451.637814930
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMinikubeProfile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMinikubeProfile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect first-449866
helpers_test.go:243: (dbg) docker inspect first-449866:

-- stdout --
	[
	    {
	        "Id": "3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b",
	        "Created": "2025-10-02T21:45:27.610379834Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 188781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-02T21:45:27.649878811Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1924ef027790684c42575d5cb3baf0720024642d9b38e3c15ae6ecb285884400",
	        "ResolvConfPath": "/var/lib/docker/containers/3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b/hosts",
	        "LogPath": "/var/lib/docker/containers/3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b/3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b-json.log",
	        "Name": "/first-449866",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "first-449866:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "first-449866",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 8388608000,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 16777216000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3ef5254f243c5aad932895a7520e2b3433ef402a80cbe5a19db5bb81ebd84a5b",
	                "LowerDir": "/var/lib/docker/overlay2/c543d329888d85bc36cde941309faced359cf3902cd26a4f0eb1860f15e48012-init/diff:/var/lib/docker/overlay2/eb188c1673eaed8826f5d17d567176d3fdd0d6a495495fcc8577cd2074fa20ab/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c543d329888d85bc36cde941309faced359cf3902cd26a4f0eb1860f15e48012/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c543d329888d85bc36cde941309faced359cf3902cd26a4f0eb1860f15e48012/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c543d329888d85bc36cde941309faced359cf3902cd26a4f0eb1860f15e48012/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "first-449866",
	                "Source": "/var/lib/docker/volumes/first-449866/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "first-449866",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "first-449866",
	                "name.minikube.sigs.k8s.io": "first-449866",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d05ce9f984c5fe51bd25616e42b8c960a492079e25fd974a39784e529a0d9168",
	            "SandboxKey": "/var/run/docker/netns/d05ce9f984c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32828"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32832"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32830"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32831"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "first-449866": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:64:e6:ab:b2:0a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1c9a6e87dcc8e10dc548448189e725cc63a553a9ee37159b2ea5ef455a1fbe2",
	                    "EndpointID": "da2af682c47960c798d01c3d8c21edf9397e5c44b04a09d52a00e809b2ea480b",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "first-449866",
	                        "3ef5254f243c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
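Every published port in the inspect output above is bound to 127.0.0.1 on a Docker-assigned host port. The same Go template that minikube itself runs later in this log pulls a single mapping back out; for the SSH port it prints 32828 for the container state captured above:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  first-449866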
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p first-449866 -n first-449866
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p first-449866 -n first-449866: exit status 6 (294.59457ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:53:40.878116  193305 status.go:458] kubeconfig endpoint: get endpoint: "first-449866" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
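The exit-status-6 case reduces to a running container whose entry is missing from /home/jenkins/minikube-integration/21682-80114/kubeconfig. The status output itself names the fix, `minikube update-context`; a check-then-repair sketch (assuming kubectl is on PATH):

	# Repair the context only if the profile is absent from the active kubeconfig
	kubectl config get-contexts -o name | grep -qx first-449866 \
	  || out/minikube-linux-amd64 update-context -p first-449866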
helpers_test.go:252: <<< TestMinikubeProfile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMinikubeProfile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p first-449866 logs -n 25
helpers_test.go:260: TestMinikubeProfile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬──────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                          ARGS                                                           │         PROFILE          │   USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼──────────┼─────────┼─────────────────────┼─────────────────────┤
	│ node    │ ha-798711 node delete m03 --alsologtostderr -v 5                                                                        │ ha-798711                │ jenkins  │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ stop    │ ha-798711 stop --alsologtostderr -v 5                                                                                   │ ha-798711                │ jenkins  │ v1.37.0 │ 02 Oct 25 21:28 UTC │ 02 Oct 25 21:28 UTC │
	│ start   │ ha-798711 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                            │ ha-798711                │ jenkins  │ v1.37.0 │ 02 Oct 25 21:28 UTC │                     │
	│ node    │ ha-798711 node add --control-plane --alsologtostderr -v 5                                                               │ ha-798711                │ jenkins  │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	│ delete  │ -p ha-798711                                                                                                            │ ha-798711                │ jenkins  │ v1.37.0 │ 02 Oct 25 21:34 UTC │ 02 Oct 25 21:34 UTC │
	│ start   │ -p json-output-018093 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio │ json-output-018093       │ testUser │ v1.37.0 │ 02 Oct 25 21:34 UTC │                     │
	│ pause   │ -p json-output-018093 --output=json --user=testUser                                                                     │ json-output-018093       │ testUser │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ unpause │ -p json-output-018093 --output=json --user=testUser                                                                     │ json-output-018093       │ testUser │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ stop    │ -p json-output-018093 --output=json --user=testUser                                                                     │ json-output-018093       │ testUser │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ delete  │ -p json-output-018093                                                                                                   │ json-output-018093       │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ start   │ -p json-output-error-709461 --memory=3072 --output=json --wait=true --driver=fail                                       │ json-output-error-709461 │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │                     │
	│ delete  │ -p json-output-error-709461                                                                                             │ json-output-error-709461 │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ start   │ -p docker-network-519978 --network=                                                                                     │ docker-network-519978    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ delete  │ -p docker-network-519978                                                                                                │ docker-network-519978    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:43 UTC │
	│ start   │ -p docker-network-325398 --network=bridge                                                                               │ docker-network-325398    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:43 UTC │ 02 Oct 25 21:44 UTC │
	│ delete  │ -p docker-network-325398                                                                                                │ docker-network-325398    │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:44 UTC │
	│ start   │ -p existing-network-902416 --network=existing-network                                                                   │ existing-network-902416  │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:44 UTC │
	│ delete  │ -p existing-network-902416                                                                                              │ existing-network-902416  │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:44 UTC │
	│ start   │ -p custom-subnet-392519 --subnet=192.168.60.0/24                                                                        │ custom-subnet-392519     │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:44 UTC │
	│ delete  │ -p custom-subnet-392519                                                                                                 │ custom-subnet-392519     │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:44 UTC │
	│ start   │ -p static-ip-321900 --static-ip=192.168.200.200                                                                         │ static-ip-321900         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:44 UTC │ 02 Oct 25 21:45 UTC │
	│ ip      │ static-ip-321900 ip                                                                                                     │ static-ip-321900         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:45 UTC │ 02 Oct 25 21:45 UTC │
	│ delete  │ -p static-ip-321900                                                                                                     │ static-ip-321900         │ jenkins  │ v1.37.0 │ 02 Oct 25 21:45 UTC │ 02 Oct 25 21:45 UTC │
	│ start   │ -p first-449866 --driver=docker  --container-runtime=crio                                                               │ first-449866             │ jenkins  │ v1.37.0 │ 02 Oct 25 21:45 UTC │                     │
	│ delete  │ -p second-464148                                                                                                        │ second-464148            │ jenkins  │ v1.37.0 │ 02 Oct 25 21:53 UTC │ 02 Oct 25 21:53 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴──────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 21:45:22
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 21:45:22.535365  188218 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:45:22.535633  188218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:45:22.535637  188218 out.go:374] Setting ErrFile to fd 2...
	I1002 21:45:22.535640  188218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:45:22.535868  188218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:45:22.536350  188218 out.go:368] Setting JSON to false
	I1002 21:45:22.537278  188218 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":12464,"bootTime":1759429059,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:45:22.537355  188218 start.go:140] virtualization: kvm guest
	I1002 21:45:22.539644  188218 out.go:179] * [first-449866] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:45:22.541156  188218 notify.go:220] Checking for updates...
	I1002 21:45:22.541166  188218 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:45:22.542895  188218 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:45:22.544307  188218 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:45:22.545465  188218 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:45:22.546479  188218 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:45:22.547690  188218 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:45:22.549038  188218 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:45:22.574108  188218 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:45:22.574226  188218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:45:22.627217  188218 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:45:22.61739736 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:45:22.627314  188218 docker.go:318] overlay module found
	I1002 21:45:22.629102  188218 out.go:179] * Using the docker driver based on user configuration
	I1002 21:45:22.630437  188218 start.go:304] selected driver: docker
	I1002 21:45:22.630445  188218 start.go:924] validating driver "docker" against <nil>
	I1002 21:45:22.630456  188218 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:45:22.630562  188218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:45:22.685070  188218 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-10-02 21:45:22.675622951 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:45:22.685225  188218 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 21:45:22.685762  188218 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 21:45:22.685893  188218 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 21:45:22.687617  188218 out.go:179] * Using Docker driver with root privileges
	I1002 21:45:22.688619  188218 cni.go:84] Creating CNI manager for ""
	I1002 21:45:22.688668  188218 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:45:22.688676  188218 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1002 21:45:22.688732  188218 start.go:348] cluster config:
	{Name:first-449866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-449866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:45:22.690258  188218 out.go:179] * Starting "first-449866" primary control-plane node in "first-449866" cluster
	I1002 21:45:22.691438  188218 cache.go:123] Beginning downloading kic base image for docker with crio
	I1002 21:45:22.692507  188218 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1002 21:45:22.693526  188218 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:45:22.693554  188218 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1002 21:45:22.693560  188218 cache.go:58] Caching tarball of preloaded images
	I1002 21:45:22.693663  188218 preload.go:233] Found /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1002 21:45:22.693655  188218 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1002 21:45:22.693669  188218 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1002 21:45:22.694011  188218 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/config.json ...
	I1002 21:45:22.694030  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/config.json: {Name:mkfb37d3845d75a63195f03001717e71e9bbc4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:22.714063  188218 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1002 21:45:22.714074  188218 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1002 21:45:22.714094  188218 cache.go:232] Successfully downloaded all kic artifacts
	I1002 21:45:22.714116  188218 start.go:360] acquireMachinesLock for first-449866: {Name:mk63f5345758d6b9f7596f965b3faf305cce50cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 21:45:22.714206  188218 start.go:364] duration metric: took 78.995µs to acquireMachinesLock for "first-449866"
	I1002 21:45:22.714224  188218 start.go:93] Provisioning new machine with config: &{Name:first-449866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-449866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1002 21:45:22.714286  188218 start.go:125] createHost starting for "" (driver="docker")
	I1002 21:45:22.716919  188218 out.go:252] * Creating docker container (CPUs=2, Memory=8000MB) ...
	I1002 21:45:22.717109  188218 start.go:159] libmachine.API.Create for "first-449866" (driver="docker")
	I1002 21:45:22.717153  188218 client.go:168] LocalClient.Create starting
	I1002 21:45:22.717224  188218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem
	I1002 21:45:22.717251  188218 main.go:141] libmachine: Decoding PEM data...
	I1002 21:45:22.717268  188218 main.go:141] libmachine: Parsing certificate...
	I1002 21:45:22.717332  188218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem
	I1002 21:45:22.717349  188218 main.go:141] libmachine: Decoding PEM data...
	I1002 21:45:22.717356  188218 main.go:141] libmachine: Parsing certificate...
	I1002 21:45:22.717636  188218 cli_runner.go:164] Run: docker network inspect first-449866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1002 21:45:22.734236  188218 cli_runner.go:211] docker network inspect first-449866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1002 21:45:22.734328  188218 network_create.go:284] running [docker network inspect first-449866] to gather additional debugging logs...
	I1002 21:45:22.734344  188218 cli_runner.go:164] Run: docker network inspect first-449866
	W1002 21:45:22.751245  188218 cli_runner.go:211] docker network inspect first-449866 returned with exit code 1
	I1002 21:45:22.751267  188218 network_create.go:287] error running [docker network inspect first-449866]: docker network inspect first-449866: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network first-449866 not found
	I1002 21:45:22.751277  188218 network_create.go:289] output of [docker network inspect first-449866]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network first-449866 not found
	
	** /stderr **
	I1002 21:45:22.751361  188218 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:45:22.768872  188218 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0d675caf745c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:be:71:92:f0:cd:e5} reservation:<nil>}
	I1002 21:45:22.769247  188218 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001deb1a0}
	I1002 21:45:22.769264  188218 network_create.go:124] attempt to create docker network first-449866 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1002 21:45:22.769310  188218 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=first-449866 first-449866
	I1002 21:45:22.825958  188218 network_create.go:108] docker network first-449866 192.168.58.0/24 created
	I1002 21:45:22.825985  188218 kic.go:121] calculated static IP "192.168.58.2" for the "first-449866" container
	I1002 21:45:22.826060  188218 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1002 21:45:22.842798  188218 cli_runner.go:164] Run: docker volume create first-449866 --label name.minikube.sigs.k8s.io=first-449866 --label created_by.minikube.sigs.k8s.io=true
	I1002 21:45:22.861046  188218 oci.go:103] Successfully created a docker volume first-449866
	I1002 21:45:22.861105  188218 cli_runner.go:164] Run: docker run --rm --name first-449866-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-449866 --entrypoint /usr/bin/test -v first-449866:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1002 21:45:23.239660  188218 oci.go:107] Successfully prepared a docker volume first-449866
	I1002 21:45:23.239694  188218 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:45:23.239719  188218 kic.go:194] Starting extracting preloaded images to volume ...
	I1002 21:45:23.239796  188218 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-449866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1002 21:45:27.542135  188218 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v first-449866:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.302281695s)
	I1002 21:45:27.542173  188218 kic.go:203] duration metric: took 4.302444645s to extract preloaded images to volume ...
	W1002 21:45:27.542290  188218 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1002 21:45:27.542326  188218 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1002 21:45:27.542373  188218 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1002 21:45:27.595314  188218 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname first-449866 --name first-449866 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=first-449866 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=first-449866 --network first-449866 --ip 192.168.58.2 --volume first-449866:/var --security-opt apparmor=unconfined --memory=8000mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1002 21:45:27.860280  188218 cli_runner.go:164] Run: docker container inspect first-449866 --format={{.State.Running}}
	I1002 21:45:27.878846  188218 cli_runner.go:164] Run: docker container inspect first-449866 --format={{.State.Status}}
	I1002 21:45:27.898259  188218 cli_runner.go:164] Run: docker exec first-449866 stat /var/lib/dpkg/alternatives/iptables
	I1002 21:45:27.946222  188218 oci.go:144] the created container "first-449866" has a running status.
	I1002 21:45:27.946243  188218 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa...
	I1002 21:45:28.358608  188218 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1002 21:45:28.384817  188218 cli_runner.go:164] Run: docker container inspect first-449866 --format={{.State.Status}}
	I1002 21:45:28.403136  188218 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1002 21:45:28.403147  188218 kic_runner.go:114] Args: [docker exec --privileged first-449866 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1002 21:45:28.449662  188218 cli_runner.go:164] Run: docker container inspect first-449866 --format={{.State.Status}}
	I1002 21:45:28.467565  188218 machine.go:93] provisionDockerMachine start ...
	I1002 21:45:28.467671  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:28.486590  188218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:28.486875  188218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:45:28.486884  188218 main.go:141] libmachine: About to run SSH command:
	hostname
	I1002 21:45:28.630949  188218 main.go:141] libmachine: SSH cmd err, output: <nil>: first-449866
	
	I1002 21:45:28.630971  188218 ubuntu.go:182] provisioning hostname "first-449866"
	I1002 21:45:28.631033  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:28.648296  188218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:28.648496  188218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:45:28.648504  188218 main.go:141] libmachine: About to run SSH command:
	sudo hostname first-449866 && echo "first-449866" | sudo tee /etc/hostname
	I1002 21:45:28.802634  188218 main.go:141] libmachine: SSH cmd err, output: <nil>: first-449866
	
	I1002 21:45:28.802712  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:28.820754  188218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:28.820964  188218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:45:28.820978  188218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfirst-449866' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 first-449866/g' /etc/hosts;
				else 
					echo '127.0.1.1 first-449866' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 21:45:28.965568  188218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 21:45:28.965587  188218 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21682-80114/.minikube CaCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21682-80114/.minikube}
	I1002 21:45:28.965627  188218 ubuntu.go:190] setting up certificates
	I1002 21:45:28.965637  188218 provision.go:84] configureAuth start
	I1002 21:45:28.965703  188218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-449866
	I1002 21:45:28.983077  188218 provision.go:143] copyHostCerts
	I1002 21:45:28.983123  188218 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem, removing ...
	I1002 21:45:28.983129  188218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem
	I1002 21:45:28.983196  188218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/ca.pem (1082 bytes)
	I1002 21:45:28.983315  188218 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem, removing ...
	I1002 21:45:28.983319  188218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem
	I1002 21:45:28.983346  188218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/cert.pem (1123 bytes)
	I1002 21:45:28.983397  188218 exec_runner.go:144] found /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem, removing ...
	I1002 21:45:28.983400  188218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem
	I1002 21:45:28.983430  188218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21682-80114/.minikube/key.pem (1675 bytes)
	I1002 21:45:28.983480  188218 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem org=jenkins.first-449866 san=[127.0.0.1 192.168.58.2 first-449866 localhost minikube]
	I1002 21:45:29.084351  188218 provision.go:177] copyRemoteCerts
	I1002 21:45:29.084406  188218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 21:45:29.084442  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.102180  188218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa Username:docker}
	I1002 21:45:29.205306  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 21:45:29.224656  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1002 21:45:29.242229  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 21:45:29.260050  188218 provision.go:87] duration metric: took 294.401547ms to configureAuth
	I1002 21:45:29.260069  188218 ubuntu.go:206] setting minikube options for container-runtime
	I1002 21:45:29.260215  188218 config.go:182] Loaded profile config "first-449866": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:45:29.260332  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.277809  188218 main.go:141] libmachine: Using SSH client type: native
	I1002 21:45:29.278003  188218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1002 21:45:29.278014  188218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1002 21:45:29.535542  188218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1002 21:45:29.535558  188218 machine.go:96] duration metric: took 1.067969261s to provisionDockerMachine
	I1002 21:45:29.535569  188218 client.go:171] duration metric: took 6.818411673s to LocalClient.Create
	I1002 21:45:29.535588  188218 start.go:167] duration metric: took 6.818481228s to libmachine.API.Create "first-449866"
	I1002 21:45:29.535594  188218 start.go:293] postStartSetup for "first-449866" (driver="docker")
	I1002 21:45:29.535606  188218 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 21:45:29.535683  188218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 21:45:29.535716  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.553546  188218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa Username:docker}
	I1002 21:45:29.658646  188218 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 21:45:29.662606  188218 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1002 21:45:29.662620  188218 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1002 21:45:29.662630  188218 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/addons for local assets ...
	I1002 21:45:29.662687  188218 filesync.go:126] Scanning /home/jenkins/minikube-integration/21682-80114/.minikube/files for local assets ...
	I1002 21:45:29.662786  188218 filesync.go:149] local asset: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem -> 841002.pem in /etc/ssl/certs
	I1002 21:45:29.662878  188218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 21:45:29.670784  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:45:29.691042  188218 start.go:296] duration metric: took 155.435743ms for postStartSetup
	I1002 21:45:29.691405  188218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-449866
	I1002 21:45:29.709097  188218 profile.go:143] Saving config to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/config.json ...
	I1002 21:45:29.709361  188218 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 21:45:29.709397  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.726755  188218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa Username:docker}
	I1002 21:45:29.826136  188218 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1002 21:45:29.830471  188218 start.go:128] duration metric: took 7.11617078s to createHost
	I1002 21:45:29.830489  188218 start.go:83] releasing machines lock for "first-449866", held for 7.116275146s
	I1002 21:45:29.830547  188218 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" first-449866
	I1002 21:45:29.848481  188218 ssh_runner.go:195] Run: cat /version.json
	I1002 21:45:29.848530  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.848537  188218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 21:45:29.848592  188218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" first-449866
	I1002 21:45:29.866430  188218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa Username:docker}
	I1002 21:45:29.867429  188218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/first-449866/id_rsa Username:docker}
	I1002 21:45:30.034099  188218 ssh_runner.go:195] Run: systemctl --version
	I1002 21:45:30.040688  188218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1002 21:45:30.075393  188218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 21:45:30.080083  188218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 21:45:30.080136  188218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 21:45:30.106636  188218 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 21:45:30.106657  188218 start.go:495] detecting cgroup driver to use...
	I1002 21:45:30.106688  188218 detect.go:190] detected "systemd" cgroup driver on host os
	I1002 21:45:30.106727  188218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 21:45:30.122373  188218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 21:45:30.134107  188218 docker.go:218] disabling cri-docker service (if available) ...
	I1002 21:45:30.134159  188218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1002 21:45:30.150082  188218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1002 21:45:30.166798  188218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1002 21:45:30.247395  188218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1002 21:45:30.334136  188218 docker.go:234] disabling docker service ...
	I1002 21:45:30.334190  188218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1002 21:45:30.352508  188218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1002 21:45:30.365183  188218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1002 21:45:30.449699  188218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1002 21:45:30.530345  188218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1002 21:45:30.543210  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 21:45:30.557498  188218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1002 21:45:30.557546  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.567511  188218 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1002 21:45:30.567572  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.576215  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.584782  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.593394  188218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 21:45:30.601283  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.609761  188218 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.623805  188218 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1002 21:45:30.632823  188218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 21:45:30.640498  188218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 21:45:30.648637  188218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:45:30.726978  188218 ssh_runner.go:195] Run: sudo systemctl restart crio
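Taken together, the sed substitutions above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a sketch reconstructed from the commands in this log, not a capture of the actual file; surrounding TOML tables omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload and crio restart immediately above are what make these settings take effect.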
	I1002 21:45:30.828055  188218 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1002 21:45:30.828107  188218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1002 21:45:30.832197  188218 start.go:563] Will wait 60s for crictl version
	I1002 21:45:30.832246  188218 ssh_runner.go:195] Run: which crictl
	I1002 21:45:30.836048  188218 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1002 21:45:30.864340  188218 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1002 21:45:30.864419  188218 ssh_runner.go:195] Run: crio --version
	I1002 21:45:30.892136  188218 ssh_runner.go:195] Run: crio --version
	I1002 21:45:30.922359  188218 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1002 21:45:30.923648  188218 cli_runner.go:164] Run: docker network inspect first-449866 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1002 21:45:30.940953  188218 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1002 21:45:30.945077  188218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:45:30.955013  188218 kubeadm.go:883] updating cluster {Name:first-449866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-449866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1002 21:45:30.955119  188218 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1002 21:45:30.955160  188218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:45:30.986411  188218 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:45:30.986424  188218 crio.go:433] Images already preloaded, skipping extraction
	I1002 21:45:30.986467  188218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1002 21:45:31.012234  188218 crio.go:514] all images are preloaded for cri-o runtime.
	I1002 21:45:31.012247  188218 cache_images.go:85] Images are preloaded, skipping loading
	I1002 21:45:31.012253  188218 kubeadm.go:934] updating node { 192.168.58.2 8443 v1.34.1 crio true true} ...
	I1002 21:45:31.012361  188218 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=first-449866 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:first-449866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
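To confirm what the kubelet will actually execute once this drop-in is written, systemd can print the merged unit; a minimal check (assuming shell access to the node, for example via minikube ssh):

    systemctl cat kubelet                          # unit file plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager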
	I1002 21:45:31.012423  188218 ssh_runner.go:195] Run: crio config
	I1002 21:45:31.058671  188218 cni.go:84] Creating CNI manager for ""
	I1002 21:45:31.058684  188218 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1002 21:45:31.058707  188218 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1002 21:45:31.058733  188218 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:first-449866 NodeName:first-449866 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 21:45:31.058872  188218 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "first-449866"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.58.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
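A config like the one dumped above can be sanity-checked before an actual init; a hedged example, not part of the test flow (both subcommands exist in kubeadm releases of this vintage):

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run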
	
	I1002 21:45:31.058931  188218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1002 21:45:31.067397  188218 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 21:45:31.067457  188218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 21:45:31.075133  188218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1002 21:45:31.087493  188218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 21:45:31.102824  188218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1002 21:45:31.115627  188218 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1002 21:45:31.119326  188218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 21:45:31.129131  188218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 21:45:31.208404  188218 ssh_runner.go:195] Run: sudo systemctl start kubelet
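After a kubelet (re)start like the one above, a quick health check is often worthwhile; an illustrative pair of commands, not from this log:

    systemctl is-active kubelet
    journalctl -u kubelet -n 20 --no-pager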
	I1002 21:45:31.234204  188218 certs.go:69] Setting up /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866 for IP: 192.168.58.2
	I1002 21:45:31.234219  188218 certs.go:195] generating shared ca certs ...
	I1002 21:45:31.234237  188218 certs.go:227] acquiring lock for ca certs: {Name:mk4f6af95c97eaf44ef2b15b9215876ac2b1c9f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:31.234404  188218 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key
	I1002 21:45:31.234446  188218 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key
	I1002 21:45:31.234465  188218 certs.go:257] generating profile certs ...
	I1002 21:45:31.234530  188218 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.key
	I1002 21:45:31.234550  188218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.crt with IP's: []
	I1002 21:45:31.392926  188218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.crt ...
	I1002 21:45:31.392945  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.crt: {Name:mk910b8f09bf259c8c7718d290fc7ff526f34f88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:31.393132  188218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.key ...
	I1002 21:45:31.393140  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/client.key: {Name:mk480da4ca6049df4787b81d1d4f496391338ed3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:31.393235  188218 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key.f74e9f42
	I1002 21:45:31.393247  188218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt.f74e9f42 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.58.2]
	I1002 21:45:31.716984  188218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt.f74e9f42 ...
	I1002 21:45:31.717001  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt.f74e9f42: {Name:mk80ae50cbb1f5bc4985336d1f144a1393d964e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:31.717168  188218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key.f74e9f42 ...
	I1002 21:45:31.717176  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key.f74e9f42: {Name:mkdced564d7a299112585cedb04f7e33ce2c66ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:31.717259  188218 certs.go:382] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt.f74e9f42 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt
	I1002 21:45:31.717327  188218 certs.go:386] copying /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key.f74e9f42 -> /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key
	I1002 21:45:31.717375  188218 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.key
	I1002 21:45:31.717385  188218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.crt with IP's: []
	I1002 21:45:32.018595  188218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.crt ...
	I1002 21:45:32.018613  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.crt: {Name:mk0bbb2b23b0c4197c42df3c817067e191a52972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:32.018809  188218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.key ...
	I1002 21:45:32.018822  188218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.key: {Name:mk69e2d00db38263987e6f44ccf9850423606329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 21:45:32.019016  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem (1338 bytes)
	W1002 21:45:32.019047  188218 certs.go:480] ignoring /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100_empty.pem, impossibly tiny 0 bytes
	I1002 21:45:32.019053  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca-key.pem (1675 bytes)
	I1002 21:45:32.019075  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/ca.pem (1082 bytes)
	I1002 21:45:32.019095  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/cert.pem (1123 bytes)
	I1002 21:45:32.019114  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/certs/key.pem (1675 bytes)
	I1002 21:45:32.019166  188218 certs.go:484] found cert: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem (1708 bytes)
	I1002 21:45:32.019795  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 21:45:32.038003  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1002 21:45:32.055069  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 21:45:32.071867  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 21:45:32.088918  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1002 21:45:32.106612  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 21:45:32.123815  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 21:45:32.141266  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/first-449866/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 21:45:32.158417  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 21:45:32.177972  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/certs/84100.pem --> /usr/share/ca-certificates/84100.pem (1338 bytes)
	I1002 21:45:32.194777  188218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/ssl/certs/841002.pem --> /usr/share/ca-certificates/841002.pem (1708 bytes)
	I1002 21:45:32.211962  188218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 21:45:32.224346  188218 ssh_runner.go:195] Run: openssl version
	I1002 21:45:32.230332  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/841002.pem && ln -fs /usr/share/ca-certificates/841002.pem /etc/ssl/certs/841002.pem"
	I1002 21:45:32.238818  188218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/841002.pem
	I1002 21:45:32.242578  188218 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  2 20:40 /usr/share/ca-certificates/841002.pem
	I1002 21:45:32.242629  188218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/841002.pem
	I1002 21:45:32.276486  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/841002.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 21:45:32.285295  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 21:45:32.293796  188218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:32.297619  188218 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  2 20:23 /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:32.297658  188218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 21:45:32.331505  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 21:45:32.340231  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/84100.pem && ln -fs /usr/share/ca-certificates/84100.pem /etc/ssl/certs/84100.pem"
	I1002 21:45:32.348559  188218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/84100.pem
	I1002 21:45:32.352393  188218 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  2 20:40 /usr/share/ca-certificates/84100.pem
	I1002 21:45:32.352443  188218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/84100.pem
	I1002 21:45:32.386385  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/84100.pem /etc/ssl/certs/51391683.0"
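The test -L / ln -fs pairs above implement OpenSSL's hashed-directory lookup: a CA placed in /etc/ssl/certs is located via a symlink named after its subject hash plus a .0 suffix. The hash in each link name is exactly what the openssl invocations in this log print, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941 -> hence the /etc/ssl/certs/b5213941.0 symlink above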
	I1002 21:45:32.395426  188218 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1002 21:45:32.399044  188218 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1002 21:45:32.399082  188218 kubeadm.go:400] StartCluster: {Name:first-449866 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:first-449866 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:45:32.399134  188218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1002 21:45:32.399172  188218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1002 21:45:32.427658  188218 cri.go:89] found id: ""
	I1002 21:45:32.427775  188218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 21:45:32.435797  188218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 21:45:32.443525  188218 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:45:32.443607  188218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:45:32.451260  188218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:45:32.451269  188218 kubeadm.go:157] found existing configuration files:
	
	I1002 21:45:32.451324  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:45:32.459562  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:45:32.459616  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:45:32.466986  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:45:32.474442  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:45:32.474480  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:45:32.481862  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:45:32.489080  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:45:32.489120  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:45:32.496100  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:45:32.503562  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:45:32.503601  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:45:32.510853  188218 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:45:32.549213  188218 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:45:32.549268  188218 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:45:32.579050  188218 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:45:32.579137  188218 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:45:32.579179  188218 kubeadm.go:318] OS: Linux
	I1002 21:45:32.579237  188218 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:45:32.579340  188218 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:45:32.579419  188218 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:45:32.579490  188218 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:45:32.579550  188218 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:45:32.579608  188218 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:45:32.579668  188218 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:45:32.579764  188218 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:45:32.637146  188218 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:45:32.637337  188218 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:45:32.637501  188218 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:45:32.645461  188218 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:45:32.647331  188218 out.go:252]   - Generating certificates and keys ...
	I1002 21:45:32.647396  188218 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:45:32.647465  188218 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:45:33.058322  188218 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1002 21:45:33.671949  188218 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1002 21:45:33.815663  188218 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1002 21:45:34.310381  188218 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1002 21:45:34.449432  188218 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1002 21:45:34.449606  188218 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:45:34.634025  188218 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1002 21:45:34.634129  188218 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1002 21:45:34.846794  188218 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1002 21:45:35.056873  188218 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1002 21:45:35.256238  188218 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1002 21:45:35.256310  188218 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:45:35.602436  188218 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:45:35.986837  188218 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:45:36.041717  188218 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:45:36.174693  188218 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:45:36.263893  188218 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:45:36.265050  188218 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:45:36.268988  188218 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:45:36.270541  188218 out.go:252]   - Booting up control plane ...
	I1002 21:45:36.270662  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:45:36.270786  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:45:36.271202  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:45:36.285086  188218 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:45:36.285208  188218 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:45:36.291388  188218 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:45:36.291721  188218 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:45:36.291797  188218 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:45:36.387880  188218 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:45:36.388026  188218 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:45:37.388889  188218 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001132175s
	I1002 21:45:37.391506  188218 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:45:37.391618  188218 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 21:45:37.391764  188218 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:45:37.391859  188218 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:49:37.393188  188218 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000644906s
	I1002 21:49:37.393418  188218 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000598124s
	I1002 21:49:37.393621  188218 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000649583s
	I1002 21:49:37.393653  188218 kubeadm.go:318] 
	I1002 21:49:37.393878  188218 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:49:37.394061  188218 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:49:37.394272  188218 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:49:37.394463  188218 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:49:37.394592  188218 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:49:37.394742  188218 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:49:37.394765  188218 kubeadm.go:318] 
	I1002 21:49:37.397010  188218 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:49:37.397247  188218 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:49:37.397881  188218 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:49:37.397960  188218 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1002 21:49:37.398137  188218 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [first-449866 localhost] and IPs [192.168.58.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001132175s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000644906s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000598124s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000649583s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
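When an init fails at wait-control-plane like this, the crictl commands suggested in the output above are the quickest first step; a sketch of the loop (the placeholder ID and the journalctl step are illustrative additions, not from this log):

    crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID
    journalctl -u kubelet -n 100 --no-pager    # kubelet-side view of the same failure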
	
	I1002 21:49:37.398210  188218 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1002 21:49:37.846553  188218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 21:49:37.859257  188218 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1002 21:49:37.859315  188218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 21:49:37.867303  188218 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 21:49:37.867312  188218 kubeadm.go:157] found existing configuration files:
	
	I1002 21:49:37.867354  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1002 21:49:37.875290  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1002 21:49:37.875344  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1002 21:49:37.882950  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1002 21:49:37.891270  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1002 21:49:37.891318  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1002 21:49:37.898610  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1002 21:49:37.906252  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1002 21:49:37.906312  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1002 21:49:37.913351  188218 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1002 21:49:37.920843  188218 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1002 21:49:37.920884  188218 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1002 21:49:37.927962  188218 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1002 21:49:37.983907  188218 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1002 21:49:38.041448  188218 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1002 21:53:39.897396  188218 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1002 21:53:39.897620  188218 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1002 21:53:39.900602  188218 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1002 21:53:39.900702  188218 kubeadm.go:318] [preflight] Running pre-flight checks
	I1002 21:53:39.900850  188218 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1002 21:53:39.900932  188218 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1002 21:53:39.900986  188218 kubeadm.go:318] OS: Linux
	I1002 21:53:39.901023  188218 kubeadm.go:318] CGROUPS_CPU: enabled
	I1002 21:53:39.901063  188218 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1002 21:53:39.901099  188218 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1002 21:53:39.901142  188218 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1002 21:53:39.901178  188218 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1002 21:53:39.901220  188218 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1002 21:53:39.901256  188218 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1002 21:53:39.901289  188218 kubeadm.go:318] CGROUPS_IO: enabled
	I1002 21:53:39.901345  188218 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1002 21:53:39.901506  188218 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1002 21:53:39.901601  188218 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1002 21:53:39.901672  188218 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1002 21:53:39.904340  188218 out.go:252]   - Generating certificates and keys ...
	I1002 21:53:39.904420  188218 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1002 21:53:39.904469  188218 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1002 21:53:39.904529  188218 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1002 21:53:39.904597  188218 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1002 21:53:39.904677  188218 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1002 21:53:39.904726  188218 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1002 21:53:39.904816  188218 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1002 21:53:39.904867  188218 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1002 21:53:39.904937  188218 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1002 21:53:39.904998  188218 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1002 21:53:39.905028  188218 kubeadm.go:318] [certs] Using the existing "sa" key
	I1002 21:53:39.905079  188218 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1002 21:53:39.905117  188218 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1002 21:53:39.905159  188218 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1002 21:53:39.905216  188218 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1002 21:53:39.905298  188218 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1002 21:53:39.905355  188218 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1002 21:53:39.905420  188218 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1002 21:53:39.905484  188218 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1002 21:53:39.906763  188218 out.go:252]   - Booting up control plane ...
	I1002 21:53:39.906837  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1002 21:53:39.906900  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1002 21:53:39.906954  188218 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1002 21:53:39.907043  188218 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1002 21:53:39.907119  188218 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1002 21:53:39.907207  188218 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1002 21:53:39.907280  188218 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1002 21:53:39.907320  188218 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1002 21:53:39.907426  188218 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1002 21:53:39.907517  188218 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1002 21:53:39.907565  188218 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.01597ms
	I1002 21:53:39.907639  188218 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1002 21:53:39.907710  188218 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	I1002 21:53:39.907799  188218 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1002 21:53:39.907868  188218 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1002 21:53:39.907952  188218 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	I1002 21:53:39.908052  188218 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	I1002 21:53:39.908130  188218 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	I1002 21:53:39.908133  188218 kubeadm.go:318] 
	I1002 21:53:39.908207  188218 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1002 21:53:39.908272  188218 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1002 21:53:39.908363  188218 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1002 21:53:39.908475  188218 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1002 21:53:39.908586  188218 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1002 21:53:39.908700  188218 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1002 21:53:39.908756  188218 kubeadm.go:318] 
	I1002 21:53:39.908790  188218 kubeadm.go:402] duration metric: took 8m7.509709789s to StartCluster
	I1002 21:53:39.908847  188218 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1002 21:53:39.908915  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1002 21:53:39.935768  188218 cri.go:89] found id: ""
	I1002 21:53:39.935812  188218 logs.go:282] 0 containers: []
	W1002 21:53:39.935824  188218 logs.go:284] No container was found matching "kube-apiserver"
	I1002 21:53:39.935833  188218 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1002 21:53:39.935905  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1002 21:53:39.961472  188218 cri.go:89] found id: ""
	I1002 21:53:39.961491  188218 logs.go:282] 0 containers: []
	W1002 21:53:39.961499  188218 logs.go:284] No container was found matching "etcd"
	I1002 21:53:39.961511  188218 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1002 21:53:39.961571  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1002 21:53:39.988310  188218 cri.go:89] found id: ""
	I1002 21:53:39.988350  188218 logs.go:282] 0 containers: []
	W1002 21:53:39.988359  188218 logs.go:284] No container was found matching "coredns"
	I1002 21:53:39.988364  188218 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1002 21:53:39.988422  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1002 21:53:40.013610  188218 cri.go:89] found id: ""
	I1002 21:53:40.013634  188218 logs.go:282] 0 containers: []
	W1002 21:53:40.013642  188218 logs.go:284] No container was found matching "kube-scheduler"
	I1002 21:53:40.013647  188218 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1002 21:53:40.013694  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1002 21:53:40.039879  188218 cri.go:89] found id: ""
	I1002 21:53:40.039895  188218 logs.go:282] 0 containers: []
	W1002 21:53:40.039901  188218 logs.go:284] No container was found matching "kube-proxy"
	I1002 21:53:40.039906  188218 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1002 21:53:40.039961  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1002 21:53:40.065879  188218 cri.go:89] found id: ""
	I1002 21:53:40.065897  188218 logs.go:282] 0 containers: []
	W1002 21:53:40.065907  188218 logs.go:284] No container was found matching "kube-controller-manager"
	I1002 21:53:40.065913  188218 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1002 21:53:40.065971  188218 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1002 21:53:40.092573  188218 cri.go:89] found id: ""
	I1002 21:53:40.092588  188218 logs.go:282] 0 containers: []
	W1002 21:53:40.092594  188218 logs.go:284] No container was found matching "kindnet"
	I1002 21:53:40.092603  188218 logs.go:123] Gathering logs for describe nodes ...
	I1002 21:53:40.092614  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1002 21:53:40.151123  188218 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:53:40.144016    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.144466    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.146110    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.146517    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.148104    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1002 21:53:40.144016    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.144466    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.146110    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.146517    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:40.148104    2412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
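	(Editor's note: every "connection refused" on localhost:8443 above only confirms the apiserver never came up, so kubectl cannot tell us more at this point. A hedged sketch of checks one might run on the node first; these are assumed standard commands, not taken from this report:
	
	    # Is anything listening on the apiserver port at all?
	    sudo ss -tlnp | grep 8443
	    # Probe the liveness endpoint directly; it fails the same way while the apiserver is down.
	    curl -k https://localhost:8443/livez
	)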
	I1002 21:53:40.151153  188218 logs.go:123] Gathering logs for CRI-O ...
	I1002 21:53:40.151167  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1002 21:53:40.213389  188218 logs.go:123] Gathering logs for container status ...
	I1002 21:53:40.213411  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1002 21:53:40.241340  188218 logs.go:123] Gathering logs for kubelet ...
	I1002 21:53:40.241358  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1002 21:53:40.305622  188218 logs.go:123] Gathering logs for dmesg ...
	I1002 21:53:40.305647  188218 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1002 21:53:40.320396  188218 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.01597ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1002 21:53:40.320444  188218 out.go:285] * 
	W1002 21:53:40.320531  188218 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.01597ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:53:40.320549  188218 out.go:285] * 
	W1002 21:53:40.322345  188218 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 21:53:40.325675  188218 out.go:203] 
	W1002 21:53:40.326837  188218 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 6.8.0-1041-gcp
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_IO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 502.01597ms
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.58.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000115215s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000144096s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000455592s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.58.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.58.2:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1002 21:53:40.326871  188218 out.go:285] * 
	I1002 21:53:40.328072  188218 out.go:203] 
	
	
	==> CRI-O <==
	Oct 02 21:53:31 first-449866 crio[779]: time="2025-10-02T21:53:31.793145295Z" level=info msg="createCtr: removing container 94a1afda4d2233cea79fcbdf5da0d594bfc8e623d8c45ac4a48aeb1814674829" id=a7f0c92d-a2ad-4a5f-8d39-842258a10abb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:31 first-449866 crio[779]: time="2025-10-02T21:53:31.79317517Z" level=info msg="createCtr: deleting container 94a1afda4d2233cea79fcbdf5da0d594bfc8e623d8c45ac4a48aeb1814674829 from storage" id=a7f0c92d-a2ad-4a5f-8d39-842258a10abb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:31 first-449866 crio[779]: time="2025-10-02T21:53:31.79522044Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-first-449866_kube-system_4c54d5444fee5c82c250a2bca8a5d4cf_0" id=a7f0c92d-a2ad-4a5f-8d39-842258a10abb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.769555774Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=2bf215c8-5156-44be-82db-e097b086afb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.770467285Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=8749a92c-4aeb-4eb1-8207-56a0b7288e0e name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.771252988Z" level=info msg="Creating container: kube-system/kube-controller-manager-first-449866/kube-controller-manager" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.771446603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.776103401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.776868938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.793212949Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.794635985Z" level=info msg="createCtr: deleting container ID 77b2b5ad43ba3ea507fe93f7400e1873b8f7197e5e33e2f87d3813edf60841ab from idIndex" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.794676269Z" level=info msg="createCtr: removing container 77b2b5ad43ba3ea507fe93f7400e1873b8f7197e5e33e2f87d3813edf60841ab" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.794709384Z" level=info msg="createCtr: deleting container 77b2b5ad43ba3ea507fe93f7400e1873b8f7197e5e33e2f87d3813edf60841ab from storage" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:33 first-449866 crio[779]: time="2025-10-02T21:53:33.796905499Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-first-449866_kube-system_87206e6c73d7878e4c57d437866feb40_0" id=0a5848d5-20d1-4a17-869d-ab0fbe42d0d6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.769731484Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=593f7ed2-b5b1-4f79-8263-4d9cbc683b91 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.770812613Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=78c16357-d16c-4909-8ebe-9347e23fc818 name=/runtime.v1.ImageService/ImageStatus
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.771803141Z" level=info msg="Creating container: kube-system/etcd-first-449866/etcd" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.772025636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.775312586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.775708777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.790912489Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.792330105Z" level=info msg="createCtr: deleting container ID a1f9a2d5dc5ce1eed2f2c19d10f08ec859d2dd8f3764e27d674994ca2afb3198 from idIndex" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.792368736Z" level=info msg="createCtr: removing container a1f9a2d5dc5ce1eed2f2c19d10f08ec859d2dd8f3764e27d674994ca2afb3198" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.792402117Z" level=info msg="createCtr: deleting container a1f9a2d5dc5ce1eed2f2c19d10f08ec859d2dd8f3764e27d674994ca2afb3198 from storage" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 02 21:53:37 first-449866 crio[779]: time="2025-10-02T21:53:37.794710261Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-first-449866_kube-system_be8519a58a225b11a68f2ad0be49fb17_0" id=6b7bca1b-7dbc-48a0-9a63-0268c58dd3da name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1002 21:53:41.452277    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:41.452958    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:41.454665    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:41.456153    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1002 21:53:41.456545    2569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 2 18:17] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.400703] i8042: Warning: Keylock active
	[  +0.013385] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004196] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001059] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000902] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000938] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000781] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000832] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000680] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000813] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.515329] block sda: the capability attribute has been deprecated.
	[  +0.092013] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.028089] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.700624] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> kernel <==
	 21:53:41 up  3:36,  0 user,  load average: 0.03, 0.16, 0.19
	Linux first-449866 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 02 21:53:31 first-449866 kubelet[1803]: E1002 21:53:31.795659    1803 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:53:31 first-449866 kubelet[1803]:         container kube-apiserver start failed in pod kube-apiserver-first-449866_kube-system(4c54d5444fee5c82c250a2bca8a5d4cf): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:53:31 first-449866 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 21:53:31 first-449866 kubelet[1803]: E1002 21:53:31.795702    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-first-449866" podUID="4c54d5444fee5c82c250a2bca8a5d4cf"
	Oct 02 21:53:33 first-449866 kubelet[1803]: E1002 21:53:33.769131    1803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-449866\" not found" node="first-449866"
	Oct 02 21:53:33 first-449866 kubelet[1803]: E1002 21:53:33.797220    1803 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:53:33 first-449866 kubelet[1803]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:53:33 first-449866 kubelet[1803]:  > podSandboxID="7f137cf0a31b94a62b4ac7ce34b776ff2f7a8685b1cb726f4fa0811cb4adda87"
	Oct 02 21:53:33 first-449866 kubelet[1803]: E1002 21:53:33.797314    1803 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:53:33 first-449866 kubelet[1803]:         container kube-controller-manager start failed in pod kube-controller-manager-first-449866_kube-system(87206e6c73d7878e4c57d437866feb40): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:53:33 first-449866 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 21:53:33 first-449866 kubelet[1803]: E1002 21:53:33.797348    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-first-449866" podUID="87206e6c73d7878e4c57d437866feb40"
	Oct 02 21:53:36 first-449866 kubelet[1803]: E1002 21:53:36.394118    1803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.58.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/first-449866?timeout=10s\": dial tcp 192.168.58.2:8443: connect: connection refused" interval="7s"
	Oct 02 21:53:36 first-449866 kubelet[1803]: I1002 21:53:36.549305    1803 kubelet_node_status.go:75] "Attempting to register node" node="first-449866"
	Oct 02 21:53:36 first-449866 kubelet[1803]: E1002 21:53:36.549683    1803 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.58.2:8443/api/v1/nodes\": dial tcp 192.168.58.2:8443: connect: connection refused" node="first-449866"
	Oct 02 21:53:37 first-449866 kubelet[1803]: E1002 21:53:37.769225    1803 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"first-449866\" not found" node="first-449866"
	Oct 02 21:53:37 first-449866 kubelet[1803]: E1002 21:53:37.795036    1803 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 02 21:53:37 first-449866 kubelet[1803]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:53:37 first-449866 kubelet[1803]:  > podSandboxID="764c76f2f26f0c9cd1687c54a0dc4e773d09fd59aa59d1858650c64bcebab3d4"
	Oct 02 21:53:37 first-449866 kubelet[1803]: E1002 21:53:37.795137    1803 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 02 21:53:37 first-449866 kubelet[1803]:         container etcd start failed in pod etcd-first-449866_kube-system(be8519a58a225b11a68f2ad0be49fb17): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 02 21:53:37 first-449866 kubelet[1803]:  > logger="UnhandledError"
	Oct 02 21:53:37 first-449866 kubelet[1803]: E1002 21:53:37.795169    1803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-first-449866" podUID="be8519a58a225b11a68f2ad0be49fb17"
	Oct 02 21:53:39 first-449866 kubelet[1803]: E1002 21:53:39.783383    1803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"first-449866\" not found"
	Oct 02 21:53:40 first-449866 kubelet[1803]: E1002 21:53:40.288688    1803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.58.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.58.2:8443: connect: connection refused" event="&Event{ObjectMeta:{first-449866.186acafa0da5e715  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:first-449866,UID:first-449866,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node first-449866 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:first-449866,},FirstTimestamp:2025-10-02 21:49:39.761211157 +0000 UTC m=+0.369879743,LastTimestamp:2025-10-02 21:49:39.761211157 +0000 UTC m=+0.369879743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:first-449866,}"
	

-- /stdout --
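(Editor's note: the root cause is visible in the CRI-O and kubelet sections above: every control-plane container dies at creation with "cannot open sd-bus: No such file or directory", which typically means the OCI runtime was configured for the systemd cgroup driver but cannot reach systemd over D-Bus inside the node. A hedged diagnostic sketch; paths and config keys are common CRI-O defaults, not taken from this report:

    # Which cgroup manager is CRI-O configured with ("systemd" vs "cgroupfs")?
    sudo grep -r cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null
    # The systemd driver only works when systemd is PID 1 and its bus sockets exist.
    ps -p 1 -o comm=
    ls -l /run/systemd/private /run/dbus/system_bus_socket 2>/dev/null
    # The listing kubeadm itself suggested, to confirm no kube containers survive:
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
)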
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p first-449866 -n first-449866
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p first-449866 -n first-449866: exit status 6 (300.651167ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 21:53:41.842940  193632 status.go:458] kubeconfig endpoint: get endpoint: "first-449866" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "first-449866" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "first-449866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-449866
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-449866: (1.896717663s)
--- FAIL: TestMinikubeProfile (501.26s)

x
+
TestMultiNode/serial/ValidateNameConflict (7200.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-445866
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-445866-m01 --driver=docker  --container-runtime=crio
E1002 22:18:25.860577   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1002 22:22:02.783153   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic: test timed out after 2h0m0s
	running tests:
		TestMultiNode (28m40s)
		TestMultiNode/serial (28m40s)
		TestMultiNode/serial/ValidateNameConflict (4m43s)
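(Editor's note: the 7200.06s figure is not this subtest's own runtime. Go's test binary hit its global 2h -timeout and panicked, so ValidateNameConflict, only 4m43s in, was simply the test running when the alarm fired. A hedged reproduction sketch; the flags are standard `go test` options and the package path is assumed from the stack traces below:

    # Re-run just this test with a longer suite timeout; -timeout 0 disables the alarm.
    go test -timeout 3h -run 'TestMultiNode/serial/ValidateNameConflict' ./test/integration
)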

goroutine 2097 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

goroutine 1 [chan receive, 29 minutes]:
testing.(*T).Run(0xc000582a80, {0x32034db?, 0xc000821a88?}, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1859 +0x431
testing.runTests.func1(0xc000582a80)
	/usr/local/go/src/testing/testing.go:2279 +0x37
testing.tRunner(0xc000582a80, 0xc000821bc8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
testing.runTests(0xc000596060, {0x5c616c0, 0x2c, 0x2c}, {0xffffffffffffffff?, 0xc0003c4340?, 0x5c89dc0?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc00065c960)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00065c960)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xdb
main.main()
	_testmain.go:133 +0xa8

goroutine 78 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc000103500)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000103500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc000103500)
	/home/jenkins/workspace/Build_Cross/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc000103500, 0x3c51e28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 145 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000602380)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc000602380)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:36 +0x87
testing.tRunner(0xc000602380, 0x3c51d28)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 2101 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x79fc70641a80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017828a0?, 0xc0005db746?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017828a0, {0xc0005db746, 0x8ba, 0x8ba})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a00b0, {0xc0005db746?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000870360, {0x3f63640, 0xc0000bc170})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc000870360}, {0x3f63640, 0xc0000bc170}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0005a00b0?, {0x3f637c0, 0xc000870360})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0005a00b0, {0x3f637c0, 0xc000870360})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc000870360}, {0x3f636c0, 0xc0005a00b0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0009a2080?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:748 +0x92b

goroutine 531 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3fae230, 0xc0000844d0}, 0xc000c90f50, 0xc0000caf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3fae230, 0xc0000844d0}, 0x50?, 0xc000c90f50, 0xc000c90f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3fae230?, 0xc0000844d0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x593245?, 0xc0020fe480?, 0xc0006d5650?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 522
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x286

goroutine 162 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000602540)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000602540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc000602540)
	/home/jenkins/workspace/Build_Cross/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc000602540, 0x3c51d20)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 164 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505340)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc000505340)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:83 +0x87
testing.tRunner(0xc000505340, 0x3c51d70)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 165 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505500)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/docker_test.go:146 +0x87
testing.tRunner(0xc000505500, 0x3c51d68)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 167 [chan receive, 111 minutes]:
testing.(*T).Parallel(0xc000505c00)
	/usr/local/go/src/testing/testing.go:1577 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505c00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x34
k8s.io/minikube/test/integration.TestKVMDriverInstallOrUpdate(0xc000505c00)
	/home/jenkins/workspace/Build_Cross/test/integration/driver_install_or_update_test.go:48 +0x87
testing.tRunner(0xc000505c00, 0x3c51db8)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 1824 [chan receive, 29 minutes]:
testing.(*T).Run(0xc0017c2fc0, {0x31f3138?, 0x1a3185c5000?}, 0xc0009e5260)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode(0xc0017c2fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:59 +0x367
testing.tRunner(0xc0017c2fc0, 0x3c51e10)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 235 [IO wait, 102 minutes]:
internal/poll.runtime_pollWait(0x79fc70641dc8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0002e1500?, 0x900000036?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0002e1500)
	/usr/local/go/src/internal/poll/fd_unix.go:620 +0x295
net.(*netFD).accept(0xc0002e1500)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0006deac0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc0006deac0)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc0001ff300, {0x3f9b790, 0xc0006deac0})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc0001ff300)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 232
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x129

goroutine 667 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ca8300, 0xc000c74380)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 666
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 530 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006dfd50, 0x23)
	/usr/local/go/src/runtime/sema.go:597 +0x159
sync.(*Cond).Wait(0xc000c61ce0?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3fc3d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001783080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4c5c93?, 0xc0006b0cc0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x13
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x3fae230?, 0xc0000844d0?}, 0x41b1b4?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x51
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x3fae230, 0xc0000844d0}, 0xc000c61f50, {0x3f65240, 0xc000c10cf0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xe5
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x3f65240?, 0xc000c10cf0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x46
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d02980, 0x3b9aca00, 0x0, 0x1, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 522
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x1d9

goroutine 522 [chan receive, 75 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0xc001783080, 0xc0000844d0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x295
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x614

goroutine 521 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3fc0920, {{0x3fb5948, 0xc0002483c0?}, 0xc0005df890?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x378
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 520
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x272

goroutine 532 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 531
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xbb

goroutine 695 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c12780, 0xc000ce2a80)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 694
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 802 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017b4f00, 0xc00178a070)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 393
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 1777 [chan receive, 4 minutes]:
testing.(*T).Run(0xc000c77180, {0x3218126?, 0x40962a4?}, 0xc000c20040)
	/usr/local/go/src/testing/testing.go:1859 +0x431
k8s.io/minikube/test/integration.TestMultiNode.func1(0xc000c77180)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:84 +0x17d
testing.tRunner(0xc000c77180, 0xc0009e5260)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1824
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 2081 [syscall, 4 minutes]:
syscall.Syscall6(0xf7, 0x3, 0xd, 0xc000823a08, 0x4, 0xc0006a2ab0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:95 +0x39
internal/syscall/unix.Waitid(0xc000823a36?, 0xc000823b60?, 0x5930ab?, 0x7fff2c1941ab?, 0x0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x39
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:106
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:251
os.(*Process).pidfdWait(0xc000596258?)
	/usr/local/go/src/os/pidfd_linux.go:105 +0x209
os.(*Process).wait(0xc000680808?)
	/usr/local/go/src/os/exec_unix.go:27 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0002d4780)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc0002d4780)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc0017c2c40, 0xc0002d4780)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateNameConflict({0x3fadeb0, 0xc0002da1c0}, 0xc0017c2c40, {0xc0016a6380, 0x10})
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:464 +0x48d
k8s.io/minikube/test/integration.TestMultiNode.func1.1(0xc0017c2c40?)
	/home/jenkins/workspace/Build_Cross/test/integration/multinode_test.go:86 +0x6b
testing.tRunner(0xc0017c2c40, 0xc000c20040)
	/usr/local/go/src/testing/testing.go:1792 +0xf4
created by testing.(*T).Run in goroutine 1777
	/usr/local/go/src/testing/testing.go:1851 +0x413

goroutine 2102 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc0002d4780, 0xc000c742a0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:775 +0x8f3

goroutine 2100 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x79fc185bd908, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xc0017827e0?, 0xc0009dca8f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0017827e0, {0xc0009dca8f, 0x571, 0x571})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a0098, {0xc0009dca8f?, 0x41835f?, 0x2c42f20?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc000870330, {0x3f63640, 0xc000692080})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3f637c0, 0xc000870330}, {0x3f63640, 0xc000692080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0005a0098?, {0x3f637c0, 0xc000870330})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0005a0098, {0x3f637c0, 0xc000870330})
	/usr/local/go/src/os/file.go:253 +0x9c
io.copyBuffer({0x3f637c0, 0xc000870330}, {0x3f636c0, 0xc0005a0098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00178a1c0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2081
	/usr/local/go/src/os/exec/exec.go:748 +0x92b


Test pass (92/166)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 3.92
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 3.46
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
39 TestErrorSpam/start 0.61
40 TestErrorSpam/status 0.86
41 TestErrorSpam/pause 1.31
42 TestErrorSpam/unpause 1.31
43 TestErrorSpam/stop 1.4
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
50 TestFunctional/serial/KubeContext 0.05
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.01
55 TestFunctional/serial/CacheCmd/cache/add_local 1.16
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
57 TestFunctional/serial/CacheCmd/cache/list 0.05
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
60 TestFunctional/serial/CacheCmd/cache/delete 0.1
65 TestFunctional/serial/LogsCmd 0.89
66 TestFunctional/serial/LogsFileCmd 0.9
69 TestFunctional/parallel/ConfigCmd 0.38
71 TestFunctional/parallel/DryRun 0.42
72 TestFunctional/parallel/InternationalLanguage 0.16
78 TestFunctional/parallel/AddonsCmd 0.15
81 TestFunctional/parallel/SSHCmd 0.62
82 TestFunctional/parallel/CpCmd 1.99
84 TestFunctional/parallel/FileSync 0.27
85 TestFunctional/parallel/CertSync 1.66
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
93 TestFunctional/parallel/License 0.44
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
107 TestFunctional/parallel/ProfileCmd/profile_list 0.39
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
109 TestFunctional/parallel/Version/short 0.05
110 TestFunctional/parallel/Version/components 0.5
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
115 TestFunctional/parallel/ImageCommands/ImageBuild 2.21
116 TestFunctional/parallel/ImageCommands/Setup 0.98
121 TestFunctional/parallel/MountCmd/specific-port 1.65
123 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
126 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/delete_echo-server_images 0.04
135 TestFunctional/delete_my-image_image 0.02
136 TestFunctional/delete_minikube_cached_images 0.02
164 TestJSONOutput/start/Audit 0
169 TestJSONOutput/pause/Command 0.46
170 TestJSONOutput/pause/Audit 0
172 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/unpause/Command 0.44
176 TestJSONOutput/unpause/Audit 0
178 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/stop/Command 1.22
182 TestJSONOutput/stop/Audit 0
184 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
186 TestErrorJSONOutput 0.21
188 TestKicCustomNetwork/create_custom_network 28.75
189 TestKicCustomNetwork/use_default_bridge_network 23.69
190 TestKicExistingNetwork 24
191 TestKicCustomSubnet 27.67
192 TestKicStaticIP 26.12
193 TestMainNoArgs 0.05
197 TestMountStart/serial/StartWithMountFirst 5.97
198 TestMountStart/serial/VerifyMountFirst 0.27
199 TestMountStart/serial/StartWithMountSecond 6.13
200 TestMountStart/serial/VerifyMountSecond 0.26
201 TestMountStart/serial/DeleteFirst 1.67
202 TestMountStart/serial/VerifyMountPostDelete 0.26
203 TestMountStart/serial/Stop 1.2
204 TestMountStart/serial/RestartStopped 7.75
205 TestMountStart/serial/VerifyMountPostStop 0.27
TestDownloadOnly/v1.28.0/json-events (3.92s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-887627 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-887627 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.921696183s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (3.92s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1002 20:22:52.901084   84100 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1002 20:22:52.901212   84100 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-887627
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-887627: exit status 85 (63.734968ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-887627 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887627 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:22:49
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:22:49.022212   84112 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:49.022483   84112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:49.022494   84112 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:49.022498   84112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:49.022788   84112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	W1002 20:22:49.022959   84112 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21682-80114/.minikube/config/config.json: open /home/jenkins/minikube-integration/21682-80114/.minikube/config/config.json: no such file or directory
	I1002 20:22:49.023522   84112 out.go:368] Setting JSON to true
	I1002 20:22:49.024454   84112 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":7510,"bootTime":1759429059,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:22:49.024544   84112 start.go:140] virtualization: kvm guest
	I1002 20:22:49.026972   84112 out.go:99] [download-only-887627] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1002 20:22:49.027124   84112 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 20:22:49.027166   84112 notify.go:220] Checking for updates...
	I1002 20:22:49.028575   84112 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:22:49.030062   84112 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:22:49.031471   84112 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:22:49.032973   84112 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:22:49.034376   84112 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 20:22:49.036690   84112 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:22:49.036964   84112 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:22:49.060173   84112 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:22:49.060327   84112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:49.484687   84112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:63 SystemTime:2025-10-02 20:22:49.474422695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:49.484854   84112 docker.go:318] overlay module found
	I1002 20:22:49.486633   84112 out.go:99] Using the docker driver based on user configuration
	I1002 20:22:49.486664   84112 start.go:304] selected driver: docker
	I1002 20:22:49.486672   84112 start.go:924] validating driver "docker" against <nil>
	I1002 20:22:49.486798   84112 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:49.546111   84112 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:false NGoroutines:63 SystemTime:2025-10-02 20:22:49.535707092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:49.546296   84112 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:22:49.546841   84112 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 20:22:49.546988   84112 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:22:49.548838   84112 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-887627 host does not exist
	  To start a cluster, run: "minikube start -p download-only-887627"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-887627
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.1/json-events (3.46s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-072312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-072312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.456100431s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.46s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1002 20:22:56.766668   84100 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1002 20:22:56.766719   84100 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21682-80114/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-072312
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-072312: exit status 85 (60.593934ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-887627 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-887627 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ delete  │ -p download-only-887627                                                                                                                                                   │ download-only-887627 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │ 02 Oct 25 20:22 UTC │
	│ start   │ -o=json --download-only -p download-only-072312 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-072312 │ jenkins │ v1.37.0 │ 02 Oct 25 20:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/02 20:22:53
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 20:22:53.351222   84450 out.go:360] Setting OutFile to fd 1 ...
	I1002 20:22:53.351495   84450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:53.351506   84450 out.go:374] Setting ErrFile to fd 2...
	I1002 20:22:53.351510   84450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 20:22:53.351775   84450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 20:22:53.352295   84450 out.go:368] Setting JSON to true
	I1002 20:22:53.353189   84450 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":7514,"bootTime":1759429059,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 20:22:53.353279   84450 start.go:140] virtualization: kvm guest
	I1002 20:22:53.355053   84450 out.go:99] [download-only-072312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 20:22:53.355204   84450 notify.go:220] Checking for updates...
	I1002 20:22:53.356328   84450 out.go:171] MINIKUBE_LOCATION=21682
	I1002 20:22:53.357487   84450 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 20:22:53.358724   84450 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 20:22:53.359874   84450 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 20:22:53.360916   84450 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 20:22:53.362809   84450 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 20:22:53.363048   84450 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 20:22:53.385772   84450 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 20:22:53.385907   84450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:53.440504   84450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-02 20:22:53.429715762 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:53.440669   84450 docker.go:318] overlay module found
	I1002 20:22:53.442453   84450 out.go:99] Using the docker driver based on user configuration
	I1002 20:22:53.442481   84450 start.go:304] selected driver: docker
	I1002 20:22:53.442487   84450 start.go:924] validating driver "docker" against <nil>
	I1002 20:22:53.442566   84450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 20:22:53.500751   84450 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-02 20:22:53.49122009 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 20:22:53.500940   84450 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1002 20:22:53.501455   84450 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1002 20:22:53.501627   84450 start_flags.go:984] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 20:22:53.503560   84450 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-072312 host does not exist
	  To start a cluster, run: "minikube start -p download-only-072312"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-072312
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.39s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-272222 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-272222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-272222
--- PASS: TestDownloadOnlyKic (0.39s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1002 20:22:57.828394   84100 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-809560 --alsologtostderr --binary-mirror http://127.0.0.1:39541 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-809560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-809560
--- PASS: TestBinaryMirror (0.80s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-436069
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-436069: exit status 85 (54.622296ms)

-- stdout --
	* Profile "addons-436069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-436069"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-436069
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-436069: exit status 85 (51.981737ms)

-- stdout --
	* Profile "addons-436069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-436069"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestErrorSpam/start (0.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status: exit status 6 (285.453169ms)

-- stdout --
	nospam-461767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:39:56.760551   96123 status.go:458] kubeconfig endpoint: get endpoint: "nospam-461767" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status" failed: exit status 6
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status: exit status 6 (284.670932ms)

-- stdout --
	nospam-461767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:39:57.045180   96232 status.go:458] kubeconfig endpoint: get endpoint: "nospam-461767" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status" failed: exit status 6
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status: exit status 6 (291.41067ms)

-- stdout --
	nospam-461767
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 20:39:57.337400   96342 status.go:458] kubeconfig endpoint: get endpoint: "nospam-461767" does not appear in /home/jenkins/minikube-integration/21682-80114/kubeconfig

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 pause
--- PASS: TestErrorSpam/pause (1.31s)

TestErrorSpam/unpause (1.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 unpause
--- PASS: TestErrorSpam/unpause (1.31s)

TestErrorSpam/stop (1.4s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 stop: (1.210498436s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-461767 --log_dir /tmp/nospam-461767 stop
--- PASS: TestErrorSpam/stop (1.40s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21682-80114/.minikube/files/etc/test/nested/copy/84100/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-012915 cache add registry.k8s.io/pause:3.3: (1.090763077s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-012915 /tmp/TestFunctionalserialCacheCmdcacheadd_local3810657725/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache add minikube-local-cache-test:functional-012915
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache delete minikube-local-cache-test:functional-012915
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-012915
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
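
The local-image flow above can be run by hand. A minimal sketch, assuming a Docker daemon on the host and the functional-012915 profile already running; the ./testdata build context is a placeholder for wherever the Dockerfile lives (the test uses a temporary directory):

  # build an image on the host, add it to minikube's cache, then clean up
  docker build -t minikube-local-cache-test:functional-012915 ./testdata
  out/minikube-linux-amd64 -p functional-012915 cache add minikube-local-cache-test:functional-012915
  out/minikube-linux-amd64 -p functional-012915 cache delete minikube-local-cache-test:functional-012915
  docker rmi minikube-local-cache-test:functional-012915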

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.786811ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
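
The reload sequence above is the interesting part: an image is removed inside the node, "crictl inspecti" confirms it is gone (exit status 1), and "cache reload" restores everything still listed in the cache. Reproduced by hand against the same profile:

  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
  out/minikube-linux-amd64 -p functional-012915 cache reload
  out/minikube-linux-amd64 -p functional-012915 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again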

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs
--- PASS: TestFunctional/serial/LogsCmd (0.89s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 logs --file /tmp/TestFunctionalserialLogsFileCmd4291443595/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.90s)
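
Both log commands take the profile flag; --file writes the same output to disk instead of stdout (the destination path here is arbitrary):

  out/minikube-linux-amd64 -p functional-012915 logs
  out/minikube-linux-amd64 -p functional-012915 logs --file /tmp/logs.txt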

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 config get cpus: exit status 14 (67.955476ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 config get cpus: exit status 14 (58.208666ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
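
Exit status 14 is the expected signal for an unset key, so the round trip above amounts to (cpus is simply the key the test uses):

  out/minikube-linux-amd64 -p functional-012915 config set cpus 2
  out/minikube-linux-amd64 -p functional-012915 config get cpus     # prints 2
  out/minikube-linux-amd64 -p functional-012915 config unset cpus
  out/minikube-linux-amd64 -p functional-012915 config get cpus     # exit 14: key not found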

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (166.558699ms)

                                                
                                                
-- stdout --
	* [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:07:06.579933  127538 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.581543  127538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.581697  127538 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.581715  127538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.581966  127538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.582498  127538 out.go:368] Setting JSON to false
	I1002 21:07:06.583371  127538 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.583481  127538 start.go:140] virtualization: kvm guest
	I1002 21:07:06.585564  127538 out.go:179] * [functional-012915] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:06.587660  127538 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:06.587701  127538 notify.go:220] Checking for updates...
	I1002 21:07:06.590528  127538 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:06.591939  127538 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:06.593257  127538 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:06.594482  127538 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:06.595713  127538 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:06.597186  127538 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:06.597725  127538 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:06.624556  127538 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:06.624769  127538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:06.687787  127538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:06.676257904 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:06.687924  127538 docker.go:318] overlay module found
	I1002 21:07:06.689985  127538 out.go:179] * Using the docker driver based on existing profile
	I1002 21:07:06.691380  127538 start.go:304] selected driver: docker
	I1002 21:07:06.691397  127538 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:06.691543  127538 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:06.693400  127538 out.go:203] 
	W1002 21:07:06.694497  127538 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 21:07:06.695650  127538 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
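
--dry-run validates the requested configuration without touching the cluster: the 250MB request fails the 1800MB minimum with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second dry run with no memory override passes. A minimal reproduction:

  out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
  out/minikube-linux-amd64 start -p functional-012915 --dry-run --driver=docker --container-runtime=crio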

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.672057ms)

                                                
                                                
-- stdout --
	* [functional-012915] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21682
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1002 21:07:06.995028  127793 out.go:360] Setting OutFile to fd 1 ...
	I1002 21:07:06.995116  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995122  127793 out.go:374] Setting ErrFile to fd 2...
	I1002 21:07:06.995128  127793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1002 21:07:06.995487  127793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
	I1002 21:07:06.995965  127793 out.go:368] Setting JSON to false
	I1002 21:07:06.996965  127793 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10168,"bootTime":1759429059,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 21:07:06.997080  127793 start.go:140] virtualization: kvm guest
	I1002 21:07:06.999028  127793 out.go:179] * [functional-012915] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1002 21:07:07.000503  127793 notify.go:220] Checking for updates...
	I1002 21:07:07.000539  127793 out.go:179]   - MINIKUBE_LOCATION=21682
	I1002 21:07:07.002031  127793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 21:07:07.003411  127793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig
	I1002 21:07:07.004900  127793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube
	I1002 21:07:07.006037  127793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 21:07:07.007128  127793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 21:07:07.008912  127793 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1002 21:07:07.009362  127793 driver.go:421] Setting default libvirt URI to qemu:///system
	I1002 21:07:07.034759  127793 docker.go:123] docker version: linux-28.5.0:Docker Engine - Community
	I1002 21:07:07.034869  127793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1002 21:07:07.097804  127793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-02 21:07:07.08803788 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652170752 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-12 Labels:[] ExperimentalBuild:false ServerVersion:28.5.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1002 21:07:07.097914  127793 docker.go:318] overlay module found
	I1002 21:07:07.101324  127793 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1002 21:07:07.102629  127793 start.go:304] selected driver: docker
	I1002 21:07:07.102647  127793 start.go:924] validating driver "docker" against &{Name:functional-012915 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-012915 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1002 21:07:07.102753  127793 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 21:07:07.104576  127793 out.go:203] 
	W1002 21:07:07.105751  127793 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 21:07:07.107027  127793 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
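
The French output comes from the same dry-run command under a French locale; minikube appears to pick its language from the standard locale environment variables, so a sketch of the reproduction would be (the LC_ALL value is an assumption; the harness may set the locale differently):

  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-012915 --dry-run --memory 250MB --driver=docker --container-runtime=crio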

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 addons list -o json
I1002 21:07:04.819624   84100 retry.go:31] will retry after 5.292607855s: Temporary Error: Get "http:": http: no Host in request URL
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh -n functional-012915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cp functional-012915:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd418601657/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh -n functional-012915 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh -n functional-012915 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)
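
cp works in both directions: host-to-node with a bare destination path, node-to-host with a profile-prefixed source, and "ssh -n" reads the file back from a named node to verify the copy:

  out/minikube-linux-amd64 -p functional-012915 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-012915 cp functional-012915:/home/docker/cp-test.txt /tmp/cp-test.txt
  out/minikube-linux-amd64 -p functional-012915 ssh -n functional-012915 "sudo cat /home/docker/cp-test.txt"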

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/84100/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/test/nested/copy/84100/hosts"
I1002 21:07:14.689538   84100 retry.go:31] will retry after 11.060666762s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/84100.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/84100.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/84100.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /usr/share/ca-certificates/84100.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/841002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/841002.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/841002.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /usr/share/ca-certificates/841002.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)
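
Each synced certificate is checked in three places: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and the OpenSSL subject-hash link (here 51391683.0) under /etc/ssl/certs. Spot-checking one by hand:

  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/84100.pem"
  out/minikube-linux-amd64 -p functional-012915 ssh "sudo cat /etc/ssl/certs/51391683.0"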

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active docker": exit status 1 (266.06681ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active containerd": exit status 1 (272.914671ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
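
With crio as the active runtime, the other runtimes should report inactive; "systemctl is-active" exits 3 for an inactive unit, and ssh propagates that as the non-zero status seen above:

  out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active docker"       # prints inactive, exit 3
  out/minikube-linux-amd64 -p functional-012915 ssh "sudo systemctl is-active containerd"   # prints inactive, exit 3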

                                                
                                    
TestFunctional/parallel/License (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
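
The tunnel is started here as a long-lived background process that the later serial tunnel tests build on; run interactively it would look like:

  out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr &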

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "334.864045ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "53.640364ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "331.465899ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.42018ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
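
profile list has a machine-readable mode, and --light skips the slower cluster-status probes, which is the difference between the ~330ms and ~60ms timings above:

  out/minikube-linux-amd64 profile list -o json
  out/minikube-linux-amd64 profile list -o json --light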

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012915 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012915 image ls --format short --alsologtostderr:
I1002 21:07:16.310297  133276 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:16.310561  133276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.310571  133276 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:16.310576  133276 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.310806  133276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:16.311561  133276 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.311711  133276 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.312259  133276 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:16.330377  133276 ssh_runner.go:195] Run: systemctl --version
I1002 21:07:16.330443  133276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:16.348829  133276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:16.450586  133276 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012915 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012915 image ls --format table --alsologtostderr:
I1002 21:07:16.672488  133478 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:16.672766  133478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.672775  133478 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:16.672779  133478 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.672975  133478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:16.673533  133478 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.673637  133478 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.673997  133478 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:16.693384  133478 ssh_runner.go:195] Run: systemctl --version
I1002 21:07:16.693445  133478 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:16.712943  133478 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:16.814228  133478 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012915 image ls --format json --alsologtostderr:
[{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c0
2b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["regist
ry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7a
e1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012915 image ls --format json --alsologtostderr:
I1002 21:07:16.532933  133381 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:16.533210  133381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.533220  133381 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:16.533224  133381 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.533492  133381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:16.534149  133381 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.534256  133381 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.534697  133381 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:16.554717  133381 ssh_runner.go:195] Run: systemctl --version
I1002 21:07:16.554806  133381 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:16.573425  133381 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:16.674665  133381 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012915 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012915 image ls --format yaml --alsologtostderr:
I1002 21:07:16.757396  133527 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:16.757666  133527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.757676  133527 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:16.757680  133527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:16.757909  133527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:16.758481  133527 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.758572  133527 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:16.758986  133527 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:16.776605  133527 ssh_runner.go:195] Run: systemctl --version
I1002 21:07:16.776655  133527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:16.794945  133527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:16.895431  133527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
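
The YAML listing above has a stable shape per image: id, repoDigests, repoTags, size. A minimal sketch of consuming it in Go, assuming gopkg.in/yaml.v3 is available; the image struct is illustrative, inferred from the output, and not a minikube type:

package main

import (
	"fmt"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// image mirrors the fields visible in the `image ls --format yaml` output above.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Same command the test runs against the functional-012915 profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-012915",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := yaml.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}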

TestFunctional/parallel/ImageCommands/ImageBuild (2.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh pgrep buildkitd: exit status 1 (267.948403ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr: (1.719803458s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2a5e508af76
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-012915
--> 287e776e430
Successfully tagged localhost/my-image:functional-012915
287e776e4309752083db90a9ff2fdf6d043f5099d14cc02150c96afcf4607c09
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-012915 image build -t localhost/my-image:functional-012915 testdata/build --alsologtostderr:
I1002 21:07:17.166058  133790 out.go:360] Setting OutFile to fd 1 ...
I1002 21:07:17.166202  133790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:17.166210  133790 out.go:374] Setting ErrFile to fd 2...
I1002 21:07:17.166216  133790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1002 21:07:17.166561  133790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21682-80114/.minikube/bin
I1002 21:07:17.167432  133790 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:17.168190  133790 config.go:182] Loaded profile config "functional-012915": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1002 21:07:17.168814  133790 cli_runner.go:164] Run: docker container inspect functional-012915 --format={{.State.Status}}
I1002 21:07:17.187483  133790 ssh_runner.go:195] Run: systemctl --version
I1002 21:07:17.187561  133790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-012915
I1002 21:07:17.206322  133790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21682-80114/.minikube/machines/functional-012915/id_rsa Username:docker}
I1002 21:07:17.312595  133790 build_images.go:161] Building image from path: /tmp/build.584809423.tar
I1002 21:07:17.312676  133790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 21:07:17.321453  133790 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.584809423.tar
I1002 21:07:17.325500  133790 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.584809423.tar: stat -c "%s %y" /var/lib/minikube/build/build.584809423.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.584809423.tar': No such file or directory
I1002 21:07:17.325531  133790 ssh_runner.go:362] scp /tmp/build.584809423.tar --> /var/lib/minikube/build/build.584809423.tar (3072 bytes)
I1002 21:07:17.343880  133790 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.584809423
I1002 21:07:17.351983  133790 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.584809423 -xf /var/lib/minikube/build/build.584809423.tar
I1002 21:07:17.360061  133790 crio.go:315] Building image: /var/lib/minikube/build/build.584809423
I1002 21:07:17.360128  133790 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-012915 /var/lib/minikube/build/build.584809423 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1002 21:07:18.813164  133790 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-012915 /var/lib/minikube/build/build.584809423 --cgroup-manager=cgroupfs: (1.453007858s)
I1002 21:07:18.813253  133790 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.584809423
I1002 21:07:18.821192  133790 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.584809423.tar
I1002 21:07:18.828502  133790 build_images.go:217] Built localhost/my-image:functional-012915 from /tmp/build.584809423.tar
I1002 21:07:18.828531  133790 build_images.go:133] succeeded building to: functional-012915
I1002 21:07:18.828547  133790 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.21s)
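
The three STEP lines and the podman invocation above pin down the whole crio build flow. A sketch reconstructing it outside the test harness, assuming the testdata/build context holds just a Dockerfile and content.txt (the file contents here are invented for illustration; the podman command is copied from the log):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Build context implied by the STEP lines above (assumed contents).
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}
	// Same command the log shows minikube running inside the node.
	cmd := exec.Command("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-012915", dir,
		"--cgroup-manager=cgroupfs")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}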

TestFunctional/parallel/ImageCommands/Setup (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-012915
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.98s)

TestFunctional/parallel/MountCmd/specific-port (1.65s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdspecific-port4242879431/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.667776ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 21:07:11.391268   84100 retry.go:31] will retry after 339.340435ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdspecific-port4242879431/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "sudo umount -f /mount-9p": exit status 1 (275.446758ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-012915 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdspecific-port4242879431/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.65s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image rm kicbase/echo-server:functional-012915 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T" /mount1: exit status 1 (333.857739ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1002 21:07:13.089935   84100 retry.go:31] will retry after 488.859345ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-012915 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-012915 /tmp/TestFunctionalparallelMountCmdVerifyCleanup738901749/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
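
The retry.go:31 lines in the two mount tests above show the harness polling findmnt until the 9p mount appears, with a randomized delay between attempts. A minimal sketch of that poll-with-backoff pattern, run locally rather than over ssh as the test does; it is illustrative, not minikube's retry package:

package main

import (
	"bytes"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// waitForMount polls findmnt until it reports a 9p filesystem at path,
// sleeping a jittered, growing delay between attempts (compare the
// "will retry after 339.340435ms" lines above).
func waitForMount(path string, attempts int) error {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("findmnt", "-T", path).Output()
		if err == nil && bytes.Contains(out, []byte("9p")) {
			return nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("%s: no 9p mount after %d attempts", path, attempts)
}

func main() {
	if err := waitForMount("/mount-9p", 5); err != nil {
		fmt.Println(err)
	}
}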

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-012915 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-012915 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-012915
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-012915
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-012915
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Command (0.46s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-018093 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.44s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-018093 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.22s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-018093 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-018093 --output=json --user=testUser: (1.221402598s)
--- PASS: TestJSONOutput/stop/Command (1.22s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-709461 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-709461 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (66.920199ms)
-- stdout --
	{"specversion":"1.0","id":"5f7bf21a-d529-424c-9f92-a57379456a76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-709461] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec5ea65b-c40a-4e28-97ab-dddd08b15f76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21682"}}
	{"specversion":"1.0","id":"23e2c8eb-45ce-4aed-bddf-4385f5375ee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e8bd8a03-7876-4fcc-9113-5be336a3276b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21682-80114/kubeconfig"}}
	{"specversion":"1.0","id":"7c323d5b-7b35-44f3-bddf-e97ae47c5681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21682-80114/.minikube"}}
	{"specversion":"1.0","id":"935a7c05-cdb2-4c2a-a041-ededaea4b964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"cf3e81fc-6f86-41a5-99e7-c57616eca50a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5cc29c63-66c1-4803-86f5-17fde5306b1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-709461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-709461
--- PASS: TestErrorJSONOutput (0.21s)
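
Each stdout line above is a self-contained CloudEvents-style JSON object. A minimal sketch of decoding that stream in Go; the event struct is inferred from the fields visible in the log (specversion, id, source, type, data) and is not minikube's exported type:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event captures the fields visible in the JSON lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Feed `minikube start --output=json ...` output on stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// Error events carry a name and exit code, e.g. DRV_UNSUPPORTED_OS above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: %s\n", ev.Data["name"], ev.Data["message"])
		}
	}
}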

TestKicCustomNetwork/create_custom_network (28.75s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-519978 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-519978 --network=: (26.626167677s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-519978" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-519978
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-519978: (2.103332241s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.75s)

TestKicCustomNetwork/use_default_bridge_network (23.69s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-325398 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-325398 --network=bridge: (21.730051987s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-325398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-325398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-325398: (1.943219835s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.69s)

TestKicExistingNetwork (24s)
=== RUN   TestKicExistingNetwork
I1002 21:44:04.647459   84100 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1002 21:44:04.664648   84100 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1002 21:44:04.664754   84100 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1002 21:44:04.664797   84100 cli_runner.go:164] Run: docker network inspect existing-network
W1002 21:44:04.681313   84100 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1002 21:44:04.681357   84100 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1002 21:44:04.681375   84100 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1002 21:44:04.681544   84100 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1002 21:44:04.698926   84100 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016a6810}
I1002 21:44:04.698975   84100 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1002 21:44:04.699047   84100 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1002 21:44:04.756863   84100 network_create.go:108] docker network existing-network 192.168.49.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-902416 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-902416 --network=existing-network: (21.911294757s)
helpers_test.go:175: Cleaning up "existing-network-902416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-902416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-902416: (1.947766037s)
I1002 21:44:28.633323   84100 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.00s)
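
The network.go:206 line above derives everything from one free /24. A worked sketch of how the gateway, client range, and broadcast values it logs (192.168.49.1, .2-.254, 192.168.49.255) fall out of the chosen CIDR, using only the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Same subnet the test selects before creating the docker network.
	ip, ipNet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	ip4 := ip.To4()
	gateway := net.IPv4(ip4[0], ip4[1], ip4[2], ip4[3]+1) // first usable host
	broadcast := make(net.IP, len(ip4))
	for i := range ip4 {
		broadcast[i] = ip4[i] | ^ipNet.Mask[i] // set all host bits
	}
	fmt.Printf("gateway %s, clients %s-%s, broadcast %s\n",
		gateway,
		net.IPv4(ip4[0], ip4[1], ip4[2], 2),
		net.IPv4(ip4[0], ip4[1], ip4[2], 254),
		broadcast)
}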

TestKicCustomSubnet (27.67s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-392519 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-392519 --subnet=192.168.60.0/24: (25.556617037s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-392519 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-392519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-392519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-392519: (2.094046566s)
--- PASS: TestKicCustomSubnet (27.67s)

TestKicStaticIP (26.12s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-321900 --static-ip=192.168.200.200
E1002 21:45:05.853228   84100 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21682-80114/.minikube/profiles/functional-012915/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-321900 --static-ip=192.168.200.200: (23.923322838s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-321900 ip
helpers_test.go:175: Cleaning up "static-ip-321900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-321900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-321900: (2.063811465s)
--- PASS: TestKicStaticIP (26.12s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMountStart/serial/StartWithMountFirst (5.97s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-998398 --memory=3072 --mount-string /tmp/TestMountStartserial2782729643/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-998398 --memory=3072 --mount-string /tmp/TestMountStartserial2782729643/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.97087566s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.97s)

TestMountStart/serial/VerifyMountFirst (0.27s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-998398 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.13s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014210 --memory=3072 --mount-string /tmp/TestMountStartserial2782729643/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014210 --memory=3072 --mount-string /tmp/TestMountStartserial2782729643/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.130978171s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.13s)

TestMountStart/serial/VerifyMountSecond (0.26s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-998398 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-998398 --alsologtostderr -v=5: (1.666662179s)
--- PASS: TestMountStart/serial/DeleteFirst (1.67s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.2s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-014210
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-014210: (1.195817254s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.75s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014210
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014210: (6.752962172s)
--- PASS: TestMountStart/serial/RestartStopped (7.75s)

TestMountStart/serial/VerifyMountPostStop (0.27s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014210 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

Test skip (18/166)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)